
Technology Meets Therapy as AI Enters the Conversation

 


Several studies show that artificial intelligence has become an integral part of mental health care, changing how practitioners deliver, document, and conceptualise therapy. According to a 2023 study, psychiatrists associated with the American Psychiatric Association are increasingly relying on artificial intelligence tools such as ChatGPT.

Overall, 44% of respondents reported using the language model version 3.5, and 33% had been trying out version 4.0, mainly to answer clinical questions. The study also found that 70% of those surveyed believe AI improves, or has the potential to improve, the efficiency of clinical documentation. A separate study by PsychologyJobs.com indicated that one in four psychologists had already begun integrating artificial intelligence into their practice, and another 20% were considering adopting the technology soon.

The most common applications included AI-powered chatbots for client communication, automated diagnostics to support treatment planning, and natural language processing tools to analyse text data from patients. As both studies pointed out, even as enthusiasm for artificial intelligence grows, many mental health professionals have raised concerns about the ethical, practical, and emotional implications of incorporating it into therapeutic settings.

Therapy has traditionally been viewed as a deeply personal process involving introspection, emotional healing, and gradual self-awareness. It provides individuals with a structured, empathetic environment in which to explore their beliefs, behaviours, and thoughts with the assistance of a professional. The advent of artificial intelligence, however, is beginning to reshape the contours of this experience.

ChatGPT is increasingly positioned as a complementary support in the therapeutic journey, providing continuity between sessions and enabling clients to continue their emotional work outside the therapy room. When included ethically and thoughtfully, these tools can enhance therapeutic outcomes by reinforcing key insights, encouraging consistent reflection, and offering prompts aligned with the themes explored during formal sessions.

Arguably the most valuable contribution AI can make in this context is facilitating insight, helping users gain a clearer understanding of their own behaviour and feelings. Insight refers to the ability to move beyond superficial awareness and identify the deeper psychological patterns behind one's difficulties.

Recognising, for example, that one's tendency to withdraw during conflict stems from a fear of emotional vulnerability rooted in past experience is the kind of deeper self-awareness that can be life-changing. Such breakthroughs may begin during therapy sessions, but they often evolve and crystallise outside them, as a client revisits a conversation with their therapist or encounters a situation in daily life that brings new clarity.

AI tools can be an effective companion in these moments, extending the therapeutic process beyond scheduled appointments with reflective dialogue, gentle questioning, and cognitive reframing techniques that help individuals connect the dots. The term "AI therapy" covers a range of technology-driven approaches that aim to enhance or support the delivery of mental health care.

At its essence, it refers to the application of artificial intelligence in therapeutic contexts, spanning tools designed to support licensed clinicians as well as fully autonomous platforms that interact directly with users. AI-assisted therapy augments the work of human therapists with features such as chatbots that help clients practice coping mechanisms, software that tracks mood patterns over time, and data analytics tools that give clinicians a better understanding of their clients' behaviour and the progression of treatment.

These technologies are not meant to replace mental health professionals but to empower them to optimise and personalise the therapeutic process. Fully AI-driven interventions, on the other hand, represent a more self-sufficient model of care in which users interact directly with digital platforms without the involvement of a human therapist.

Through sophisticated algorithms, these systems can deliver guided cognitive behavioural therapy (CBT) exercises, mindfulness practices, or structured journaling prompts tailored to the user's individual needs. Whether assisted or autonomous, AI-based therapy has a number of advantages, including the potential to make mental health support more accessible and affordable for individuals and families.
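As a purely illustrative sketch (the article describes capabilities rather than any particular product), a "structured journaling prompt" can be as simple as a fixed sequence of CBT-style questions collected into a thought record; the field names and questions below are invented for illustration.

```python
# Illustrative sketch of a structured CBT-style "thought record" journaling prompt.
# Field names and questions are invented; a real tool would be clinically reviewed.
from dataclasses import dataclass, asdict

@dataclass
class ThoughtRecord:
    situation: str
    automatic_thought: str
    emotion: str
    evidence_for: str
    evidence_against: str
    balanced_thought: str

PROMPTS = {
    "situation": "What happened?",
    "automatic_thought": "What went through your mind?",
    "emotion": "What did you feel, and how intense was it (0-100)?",
    "evidence_for": "What supports that thought?",
    "evidence_against": "What doesn't fit with that thought?",
    "balanced_thought": "What is a more balanced way to see it?",
}

def run_thought_record() -> ThoughtRecord:
    """Walk the user through each prompt and return the completed record."""
    answers = {field: input(f"{question}\n> ") for field, question in PROMPTS.items()}
    return ThoughtRecord(**answers)

if __name__ == "__main__":
    record = run_thought_record()
    print(asdict(record))  # a real app would store this for later reflection
```

A production system would wrap such prompts in a conversational interface and tailor them to the themes of a user's sessions, but the underlying structure is this simple.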

For many people, traditional therapy is out of reach because of high costs, long wait lists, and a shortage of licensed professionals, especially in rural or otherwise underserved areas. By delivering care through mobile apps and virtual platforms, AI solutions can remove several of these logistical and financial barriers.

These tools cannot fully replace human therapists in complex or crisis situations, but they significantly increase the accessibility of psychological care, enabling people to seek help who would otherwise face insurmountable barriers. With greater awareness of mental health, reduced stigma, and the psychological toll of global crises, demand for mental health services has risen dramatically in recent years.

The supply of qualified mental health professionals has not kept pace, leaving millions of people with inadequate care. In this context, artificial intelligence has emerged as a powerful tool for bridging the gap between need and access. By enhancing clinicians' work and streamlining key processes, AI has the potential to significantly expand the capacity of mental health systems worldwide. What was once thought futuristic is becoming a practical reality.

According to trends reported by the American Psychological Association's Monitor, artificial intelligence technologies are already transforming clinical workflows and therapeutic approaches. From intelligent chatbots to algorithms that automate administrative tasks, AI is changing how mental healthcare is delivered at every stage of the process.

Therapists who integrate AI into their practice can increase efficiency while also improving the quality and consistency of the care they provide. The current AI toolbox offers a wide range of applications supporting both the clinical and operational functions of a therapist: 

1. Assessment and Screening

Advanced natural language processing models are being used to analyse patient speech and written communications for early signs of psychological distress, including suicidal ideation, severe mood fluctuations, or trauma-related triggers. By facilitating early detection and timely intervention, these tools can help prevent crises before they escalate.
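To make the idea concrete, the sketch below shows how a text-classification model might flag concerning messages for clinician review. It is illustrative only and is not drawn from the studies cited above: the off-the-shelf sentiment model stands in for a clinically validated screening tool, and the example entries are invented.

```python
# Illustrative sketch only: a generic sentiment model standing in for a
# clinically validated distress-screening model. Not for use with real patient data.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default off-the-shelf model

journal_entries = [
    "I managed to get outside for a walk today and it helped a little.",
    "I feel completely hopeless and can't see a way forward.",
]

for entry in journal_entries:
    result = classifier(entry)[0]  # e.g. {"label": "NEGATIVE", "score": 0.99}
    # Route strongly negative entries to a clinician rather than acting on them automatically.
    if result["label"] == "NEGATIVE" and result["score"] > 0.95:
        print(f"Flag for clinician review: {entry!r} (confidence {result['score']:.2f})")
```

In practice, any such screening pipeline would sit behind strict consent, privacy, and clinical-governance controls, with a human always making the final decision.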

2. Intervention and Self-Help

AI-powered chatbots built around cognitive behavioural therapy (CBT) frameworks give users access to structured mental health support anytime, anywhere. A growing body of research, including recent randomised controlled trials, suggests that these interventions can produce measurable reductions in symptoms of depression, particularly major depressive disorder (MDD), and can serve as an effective alternative to conventional treatment for such conditions.

3. Administrative Support 

AI tools are streamlining several burdensome and time-consuming parts of clinical work, including drafting progress notes, assisting with diagnostic coding, and managing insurance pre-authorisation requests. These efficiencies reduce clinician workload and burnout, leaving more time and energy for patient care.

4. Training and Supervision 

AI-generated standardised patients offer a new approach to clinical training: realistic virtual clients let therapists in training practice therapeutic techniques in a controlled environment. AI-based analytics can also evaluate session quality and provide constructive feedback, helping clinicians sharpen their skills and improve treatment outcomes.

As artificial intelligence continues to evolve, mental health professionals must stay on top of its developments, evaluate its clinical validity, and consider the ethical implications of its use. Used properly, AI can serve as both a support system and a catalyst for innovation, extending the reach and effectiveness of modern mental healthcare services.

Among AI's growing applications in mental health, AI-powered talk therapy stands out as a significant innovation, offering practical, accessible support to people dealing with common psychological challenges such as anxiety, depression, and stress. Delivered through interactive platforms and mobile apps, these systems provide personalised coping strategies, mood tracking, and guided therapeutic exercises.

By making support available on demand, these tools promote continuity of care and help individuals maintain therapeutic momentum between sessions or when access to traditional services is limited. As a result, AI interventions are increasingly viewed as complementary to traditional psychotherapy rather than a replacement for it. Most of these systems draw on evidence-based techniques from cognitive behavioural therapy (CBT) and dialectical behaviour therapy (DBT).

Translated into digital formats, these techniques let users work on emotion regulation, cognitive reframing, and behavioural activation in real time. The tools are designed to be immediately action-oriented, enabling users to apply therapeutic principles directly to situations as they arise and, in turn, build greater self-awareness and resilience.

A person dealing with social anxiety, for example, can use an AI simulation to gradually practice social interactions in a low-pressure environment and build confidence. Likewise, someone experiencing acute stress can access mindfulness prompts and reminders that help them regain focus and ground themselves. These tools are built on the clinical expertise of mental health professionals but designed to fit into everyday life, providing a scalable extension of traditional care models.

For all its growing use in therapy, however, AI is not without significant challenges and limitations. One of the most commonly cited concerns is the absence of genuine human interaction. Empathy, intuition, and emotional nuance are foundations of effective psychotherapy, and despite advances in natural language processing and sentiment analysis, artificial intelligence cannot fully replicate them.

Users seeking deeper relational support may find AI interactions impersonal or insufficient, leading to feelings of isolation or dissatisfaction. AI systems may also misread complex emotions or cultural nuances, producing responses that lack the sensitivity or relevance needed to offer meaningful support.

Privacy is another major concern. Mental health applications frequently handle highly sensitive data about their users, making data security paramount. Users may be reluctant to engage with these platforms because of concerns over how their personal data is stored, managed, or possibly shared with third parties.

To earn widespread trust and legitimacy, developers and providers of AI therapy must maintain a high level of transparency and strong encryption, and must comply with privacy laws such as HIPAA and GDPR.

Ethical concerns also arise when algorithms are used to make decisions in deeply personal areas. AI can unintentionally reinforce biases, oversimplify complex issues, or provide standardised advice that fails to reflect each individual's unique context.

In a field that places a high value on personalisation, generic or inappropriate responses are especially dangerous. Ethically sound AI therapy requires rigorous oversight, continuous evaluation of system outputs, and clear guidelines governing the proper use and limitations of these technologies. Ultimately, while AI offers promising tools for extending mental health care, its success depends on implementation that balances innovation with compassion, accuracy, and respect for individual experience.

As artificial intelligence is incorporated into mental health care at an increasing pace, mental health professionals, policymakers, developers, and educators must work together to create a framework for responsible use. The future of AI therapy will depend not only on technological advances but also on a commitment to ethical responsibility, clinical integrity, and human-centred care.

Robust research, inclusive algorithm development, and extensive clinician training will be central to ensuring that AI solutions are both safe and therapeutically meaningful. It is also critical to be transparent with users about the capabilities and limitations of these tools so that they can make informed decisions about their mental health care.

Organisations and practitioners who wish to remain at the forefront of innovation should prioritise strategic implementation, treating AI not as a replacement but as a valuable partner in care. By pairing innovation with empathy, the mental health sector can use AI's full potential to build a more accessible, efficient, and personalised future for therapy, one in which technology amplifies human connection rather than diminishing it.

Cybercrime Syndicate Escalates Global Threat Levels

 


As the global cybersecurity landscape evolves rapidly, malicious actors are adopting new methods to accomplish their goals. To mark International Anti-Ransomware Day, cybersecurity company KnowBe4 has issued a critical warning about a looming threat that could change the face of cyberattacks: agentic AI-powered ransomware.

KnowBe4, known for its comprehensive approach to human risk management, predicts that a new wave of cyber threats dominated by autonomous artificial intelligence agents is just around the corner. This type of AI-enabled ransomware, referred to as agentic AI ransomware, is designed to carry out every phase of an attack independently, with greater speed, precision, and adaptability.

Unlike traditional ransomware attacks, which typically follow a linear and often manual process, agentic AI ransomware deploys intelligent bots that automate the entire attack lifecycle. Beyond gaining access to systems, these bots can perform sophisticated environmental analysis, detect vulnerabilities, and execute a series of escalating attacks, all to maximise the criminals' financial gains.

Increasingly sophisticated and automated attacks not only allow criminals to expand their reach and scale but also shrink the window in which defenders can respond. The warning comes at a time when ransomware demands and payouts have surged dramatically.

A report released for International Anti-Ransomware Day 2024 highlighted an alarming rise in ransom payments worldwide, with attacks affecting more and more organisations. The day, marked annually to raise awareness of the devastating effects of ransomware and to promote cyber hygiene, is a reminder to enterprises and individuals alike to strengthen their cyber defences.

Artificial intelligence remains a double-edged sword in cybersecurity, and proactive measures, employee training, and adaptive technologies are essential to counter the danger. In a troubling illustration, several of the UK's most iconic retailers have recently fallen victim to sophisticated ransomware campaigns carried out by a cybercriminal group known as DragonForce. High-profile companies including Co-op, Harrods, and Marks & Spencer were reported to have been compromised, suffering serious breaches involving the theft and encryption of sensitive customer data.

Although the ransom demands have not been disclosed, there are urgent questions about the identity of this emerging threat actor and how it executes its attacks. Researchers believe DragonForce is connected to Scattered Spider, a notorious cybercrime group that has come under increased scrutiny following recent law enforcement operations that led to the arrest of five suspected members.

According to experts at Check Point Research, DragonForce began operating in late 2023 and is now referred to as a "ransomware cartel." Its origins have been speculatively traced to Malaysian hacktivist collectives, but it has since grown into a highly organised cybercriminal operation. Under its ransomware-as-a-service (RaaS) business model, DragonForce provides malicious tools to affiliates in exchange for a share of the ransom, usually around 20%.

This model lets cybercriminals of any skill level mount customised ransomware attacks. The group also facilitates the creation of data leak websites, used to publicly disclose stolen information when victims refuse to pay. By offering anonymity, operational flexibility, and the promise of high financial returns, DragonForce has become one of the most effective vehicles for digital extortion on a global scale.

The fallout from the DragonForce attacks continues, with Co-op confirming that cybercriminals accessed the personal data of a considerable number of its members. While the company had previously maintained that the incident would have only a minor effect on its operations and that proactive cybersecurity measures were in place, the scale and nature of the breach appear greater than initially expected.

Despite Co-op's reassurances that customer data had not been compromised, concerns remain elevated amid the attackers' claims to have obtained personal information on up to 20 million people linked to its membership scheme, a figure the company rejects as inaccurate. The threat actors behind the attack, operating under the DragonForce alias, have also claimed responsibility for an ongoing attack on Marks & Spencer and an attempted intrusion into Harrods' systems.

In one striking revelation, the hackers shared screenshots with a media outlet showing them contacting Co-op's head of cybersecurity via an internal communication platform on 25 April, suggesting a previously unreported level of access and coordination. In response to the wave of attacks on major retailers, senior government officials have urged businesses to make cybersecurity a top priority.

Minister Pat McFadden's remarks placed particular emphasis on digital resilience, stating that the complexity and frequency of such threats demand constant vigilance across both the public and private sectors. Cybersecurity experts likewise advise organisations to strengthen their digital defences in light of the recent attacks attributed to DragonForce and its suspected affiliate Scattered Spider.

Google's Mandiant cyber intelligence division has recently issued a series of strategic recommendations to help at-risk companies mitigate intrusions. Among them, Mandiant highlights enhanced training for helpdesk personnel, who are often exploited through social engineering as entry points for threat actors.

Mandiant also emphasises the need for strong multi-factor authentication protocols and comprehensive visibility across all IT environments, noting that these measures are essential for identifying and neutralising threats before they grow into a full-scale ransomware attack.

The guidance reflects growing concern that cybercriminals are becoming ever more sophisticated and persistent in exploiting human and technological vulnerabilities to breach even the most secure organisations. Meanwhile, as more facts about the Co-op data breach emerge, the severity of the attack orchestrated by DragonForce has become increasingly clear.

The hackers are alleged to have contacted several members of Co-op's executive committee to escalate their extortion efforts, claiming to have obtained sensitive information from the retailer's internal systems. The material reportedly accessed includes internal communications, employee login credentials, and a sample database containing the names, addresses, email addresses, telephone numbers, and membership card numbers of 10,000 customers.

Co-op has since confirmed that member information was compromised, but made clear that passwords, financial information, and transaction details were not. In response, it has tightened security: to prevent further unauthorised access, staff have been instructed to keep cameras on during virtual meetings, to restrict recording and transcription, and to verify participants' identities.

These protocols appear to be a direct response to the attackers' exploitation of the company's internal collaboration tools. With more than 2,500 supermarkets, 800 funeral homes, an insurance business, and approximately 70,000 employees nationwide, Co-op is under tremendous pressure to rebuild trust and strengthen its digital defences. DragonForce, a well-known group operating under a ransomware-as-a-service (RaaS) model, has yet to say what it plans to do with the stolen data if its demands are not met.

There is no clear indication of the attackers' affiliations, but their tactics closely match those of a loosely coordinated hacker collective known as Scattered Spider or Octo Tempest, whose young, English-speaking members communicate through platforms such as Telegram and Discord. In an unusual twist, the individuals behind this attack have adopted aliases reminiscent of characters from the American crime series The Blacklist, stating ominously that they will be putting UK retailers "on the Blacklist."

The group has declined to comment on the impact of its actions or on its attacks against other retailers such as Marks & Spencer and Harrods, and that silence only deepens the uncertainty surrounding its motives. In a statement, Co-op said it is now working with the National Cyber Security Centre (NCSC) and the National Crime Agency (NCA) to resolve the situation.

As the ransomware threat continues to grow, the incident is a stark reminder that all organisations, especially those handling sensitive consumer data, must make cybersecurity part of their operational strategy. The DragonForce attack underscores that cybersecurity is a core business priority, not a technical afterthought.

With ransomware becoming more advanced and more accessible, companies must take a proactive approach: integrating cybersecurity into strategic plans, training employees, and implementing adaptive, layered defences. Lawmakers and regulators, for their part, must push to strengthen data protection standards and make breach reporting more transparent.

In a world where data is increasingly digitised, securing it and maintaining trust is imperative: it is a prerequisite for operational continuity and long-term credibility.

Cybercrime in 2025: AI-Powered Attacks, Identity Exploits, and the Rise of Nation-State Threats

 


Cybercrime has evolved beyond traditional hacking, transforming into a highly organized and sophisticated industry. In 2025, cyber adversaries — ranging from financially motivated criminals to nation-state actors—are leveraging AI, identity-based attacks, and cloud exploitation to breach even the most secure organizations. The 2025 CrowdStrike Global Threat Report highlights how cybercriminals now operate like businesses. 

One of the fastest-growing trends is Access-as-a-Service, where initial access brokers infiltrate networks and sell entry points to ransomware groups and other malicious actors. The shift from traditional malware to identity-based attacks is accelerating, with 79% of observed breaches relying on valid credentials and remote administration tools instead of malicious software. Attackers are also moving faster than ever. Breakout times, the time it takes cybercriminals to move laterally within a network after breaching it, have hit a record low of just 48 minutes, with the fastest observed attack spreading in just 51 seconds.

This efficiency is fueled by AI-driven automation, making intrusions more effective and harder to detect. AI has also revolutionized social engineering. AI-generated phishing emails now have a 54% click-through rate, compared to just 12% for human-written ones. Deepfake technology is being used to execute business email compromise scams, such as a $25.6 million fraud involving an AI-generated video. In a more alarming development, North Korean hackers have used AI to create fake LinkedIn profiles and manipulate job interviews, gaining insider access to corporate networks. 

The rise of AI in cybercrime is mirrored by the increasing sophistication of nation-state cyber operations. China, in particular, has expanded its offensive capabilities, with a 150% increase in cyber activity targeting finance, manufacturing, and media sectors. Groups like Vanguard Panda are embedding themselves within critical infrastructure networks, potentially preparing for geopolitical conflicts. 

As traditional perimeter security becomes obsolete, organizations must shift to identity-focused protection strategies. Cybercriminals are exploiting cloud vulnerabilities, leading to a 35% rise in cloud intrusions, while access broker activity has surged by 50%, demonstrating the growing value of stolen credentials. 

To combat these evolving threats, enterprises must adopt new security measures. Continuous identity monitoring, AI-driven threat detection, and cross-domain visibility are now critical. As cyber adversaries continue to innovate, businesses must stay ahead—or risk becoming the next target in this rapidly evolving digital battlefield.

DeepSeek AI Raises Data Security Concerns Amid Ties to China

 

The launch of DeepSeek AI has created waves in the tech world, offering powerful artificial intelligence models at a fraction of the cost compared to established players like OpenAI and Google. 

However, its rapid rise in popularity has also sparked serious concerns about data security, with critics drawing comparisons to TikTok and its ties to China. Government officials and cybersecurity experts warn that the open-source AI assistant could pose a significant risk to American users. 

On Thursday, two U.S. lawmakers announced plans to introduce legislation banning DeepSeek from all government devices, citing fears that the Chinese Communist Party (CCP) could access sensitive data collected by the app. This move follows similar actions in Australia and several U.S. states, with New York recently enacting a statewide ban on government systems. 

The growing concern stems from China’s data laws, which require companies to share user information with the government upon request. Like TikTok, DeepSeek’s data could be mined for intelligence purposes or even used to push disinformation campaigns. Although the AI app is the current focus of security conversations, experts say that the risks extend beyond any single model, and users should exercise caution with all AI systems. 

Unlike social media platforms that users can consciously avoid, AI models like DeepSeek are more difficult to track. Dimitri Sirota, CEO of BigID, a cybersecurity company specializing in AI security compliance, points out that many companies already use multiple AI models, often switching between them without users’ knowledge. This fluidity makes it challenging to control where sensitive data might end up. 

Kelcey Morgan, senior manager of product management at Rapid7, emphasizes that businesses and individuals should take a broad approach to AI security. Instead of focusing solely on DeepSeek, companies should develop comprehensive practices to protect their data, regardless of the latest AI trend. The potential for China to use DeepSeek’s data for intelligence is not far-fetched, according to cybersecurity experts. 

With significant computing power and data processing capabilities, the CCP could combine information from multiple sources to create detailed profiles of American users. Though this might not seem urgent now, experts warn that today’s young, casual users could grow into influential figures worth targeting in the future. 

To stay safe, experts advise treating AI interactions with the same caution as any online activity. Users should avoid sharing sensitive information, be skeptical of unusual questions, and thoroughly review an app’s terms and conditions. Ultimately, staying informed and vigilant about where and how data is shared will be critical as AI technologies continue to evolve and become more integrated into everyday life.

Nearly Half of Security Experts Believe AI is Risky

 

AI is viewed by 48% of security experts as a major security threat to their organisation, according to a new survey of 500 security professionals by the security research platform HackerOne. 

Their main worries about AI include the following: 

  • Leaked training data (35%)
  • Unauthorized usage (33%)
  • The hacking of AI models by outsiders (32%) 

These concerns emphasise how vital it is for businesses to review their AI security plans and address shortcomings before they become major issues. 

While the full Hacker Powered Security Report will not be available until later this fall, findings from a HackerOne-sponsored SANS Institute report reveal that 58% of security experts believe that security teams and threat actors could be in an "arms race" to use generative AI tactics and techniques in their work. 

According to the SANS poll, 71% of security professionals have successfully used AI to automate routine jobs. However, the same participants admitted that threat actors could employ AI to improve their operations' efficiency. Specifically, the participants "were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).” 

“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, stated in a press release. 

So what is the solution? External assessment of AI implementations is advised. More than two-thirds of those polled (68%) said "external review" is the most effective technique to identify AI safety and security risks.

“Teams are now more realistic about AI’s current limitations” than they were last year, noted HackerOne Senior Solutions Architect Dane Sherrets. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Problems like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”

Telus Makes History with ISO Privacy Certification in AI Era

Telus, a prominent telecoms provider, has accomplished a significant milestone by obtaining the prestigious ISO Privacy by Design certification. This certification represents a critical turning point in the business's dedication to prioritizing privacy. The accomplishment demonstrates Telus' commitment to implementing industry-leading data protection best practices and can be seen as a new benchmark.

Privacy by Design, a concept introduced by Dr. Ann Cavoukian, emphasizes the integration of privacy considerations into the design and development of technologies. Telus' attainment of this certification showcases the company's proactive approach to safeguarding user information in an era where digital privacy is a growing concern.

Telus' commitment to privacy aligns with the broader context of technological advancements and their impact on personal data. As artificial intelligence (AI) continues to shape various industries, privacy concerns have become more pronounced. The intersection of AI and privacy is critical for companies to navigate responsibly.

The realization that AI technologies sometimes entail the processing of enormous volumes of sensitive data highlights the significance of this intersection. Telus's acquisition of the ISO Privacy by Design certification becomes particularly significant in the current digital context when privacy infractions and data breaches frequently make news.

In an era where data is often referred to as the new currency, the need for robust privacy measures cannot be overstated. Telus' proactive stance not only meets regulatory requirements but also sets a precedent for other companies to prioritize privacy in their operations.

Dr. Ann Cavoukian, the author of Privacy by Design, says that "integrating privacy into the design process is not only vital but also feasible and economical. It is privacy plus security, not privacy or security alone."

Privacy presents both opportunities and concerns as technology advances. Telus' certification is a shining example for the sector, indicating that privacy needs to be integrated into technology development from the ground up.

The achievement of ISO Privacy by Design certification by Telus represents a turning point in the ongoing conversation about privacy and technology. The proactive approach adopted by the organization not only guarantees adherence to industry norms but also serves as a noteworthy model for others to emulate. Privacy will continue to be a key component of responsible and ethical innovation as AI continues to change the digital landscape.


Apple Co-founder Says AI Could Make Cyber Scams ‘Harder to Spot’


Apple co-founder Steve Wozniak recently cautioned that artificial intelligence (AI) could result in making cyber scams and misinformation more challenging to recognize. 

Speaking to the BBC, he noted that the technology may also be harnessed by "bad actors." Mr. Wozniak said AI-generated content should be clearly labelled, and highlighted the need for proper regulation in the industry. 

In March, Mr. Wozniak, along with Tesla CEO Elon Musk and other technology figures, signed a letter urging a halt to the development of more potent AI models. 

Mr. Wozniak, also referred to as Woz in the tech community, is a seasoned veteran of Silicon Valley who co-founded Apple with Steve Jobs and created the company's first computer. In an interview with BBC Technology Editor Zoe Kleinman, he discussed both his fears about artificial intelligence and its advantages.

"AI is so intelligent it's open to the bad players, the ones that want to trick you about who they are," said Kleinman. 

AI refers to computer programs that can perform tasks that would typically require human intelligence. This includes systems that can identify objects in images and chatbots that can comprehend queries and provide responses that seem human.

Mr. Wozniak ardently believes that AI will not replace humans, since it lacks emotions. However, he warns that it makes bad actors more convincing, pointing to generative AI tools such as ChatGPT that can produce text that sounds human and "intelligent."

A Human Really has to Take the Responsibility 

Wozniak believes that those who publish anything generated by artificial intelligence should be held accountable for it. "A human really has to take the responsibility for what is generated by AI," he says. 

The large tech companies that "feel they can kind of get away with anything" should be held accountable by regulations, according to him.

Yet he expressed doubt that authorities would make the correct decisions, saying, "I think the forces that drive for money usually win out, which is sort of sad."

Technology cannot be Stopped 

Mr. Wozniak, a computer pioneer, believes that those developing artificial intelligence today can learn from the opportunities missed during the early days of the internet. Although "we can't stop the technology," in his view we can teach people to recognize fraud and other nefarious attempts to obtain personal information.

Last week, Apple's current CEO, Tim Cook, told investors that it is crucial to be "deliberate and thoughtful" in how the company approaches AI. "We view AI as huge, and we'll continue weaving it in our products on a very thoughtful basis," he said.  

Shadow AI: The Novel, Unseen Threat to Your Company's Data

 

Earlier this year, ChatGPT emerged as the face of generative AI. ChatGPT was designed to help with almost everything, from creating business plans to breaking down complex topics into simple terms. Since then, businesses of all sizes have been eager to explore and reap the benefits of generative AI. 

However, as this new chapter of AI innovation moves at breakneck speed, CEOs and leaders risk overlooking a type of technology that has been infiltrating through the back door: shadow AI. 

Overlooking shadow AI is a risky option 

To put it simply, "shadow AI" refers to employees who, without management awareness, add AI tools to their work systems to make life easier. Although most of the time this pursuit of efficiency is well-intentioned, it is exposing businesses to new cybersecurity and data privacy risks.

Employees who want to increase productivity and process efficiency are usually the ones who embrace shadow AI, typically to navigate tedious tasks or laborious processes. This could mean asking AI to summarise the main ideas from meeting minutes or to comb through hundreds of PowerPoint decks in search of critical data. 

Employees typically don't intentionally expose their company to risk. On the contrary. All they're doing is simplifying things so they can cross more things off their to-do list. However, given that over a million adults in the United Kingdom have already utilised generative AI at work, there is a chance that an increasing number of employees will use models that their employers have not approved for safe use, endangering data security in the process. 

Major risks 

Shadow AI carries two main risks. First, employees may feed sensitive company information into such tools, or leave it exposed to scraping while the technology operates in the background. For example, an employee using ChatGPT or Google Bard to boost productivity or clarify information may be entering sensitive or confidential company data. 

Sharing data isn't always an issue—companies frequently rely on third-party tools and service providers for information—but problems can arise when the tool in question and its data-handling policies haven't been assessed and approved by the business. 

The second risk related to shadow AI is that, because businesses generally aren't aware that these tools are being used, they can't assess the risks or take appropriate action to minimise them. (This may also apply to employees who receive false information and subsequently use it in their work.) 

This is something that occurs behind closed doors and beyond the knowledge of business leaders. In 2022, 41% of employees created, modified, or accessed technology outside of IT's purview, according to research from Gartner. By 2027, the figure is expected to increase to 75%. 

And therein lies the crux of the issue. How can organisations monitor and assess the risks of something they don't understand? 

Some companies, such as Samsung, have gone so far as to ban ChatGPT from their offices after employees uploaded proprietary source code and leaked confidential company information via the public platform. Apple and JP Morgan have also restricted employee use of ChatGPT. Others are burying their heads in the sand or failing to notice the problem at all. 

What should business leaders do to mitigate the risks of shadow AI while also ensuring that they and their teams can benefit from the efficiencies and insights that artificial intelligence can offer? 

First, leaders should educate teams on what constitutes safe AI practice, as well as on the risks associated with shadow AI, and provide clear guidance on when ChatGPT can and cannot be used safely at work. 

Second, companies should consider offering private, in-house generative AI tools to employees whose work would benefit from them. Open models such as Llama 2 and Falcon can be downloaded and run securely to power generative AI tools, while Azure OpenAI provides a middle-ground option in which data remains within the company's Microsoft "tenancy." 
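For teams exploring the in-house route, here is a minimal sketch of running an open model locally so that prompts never leave company infrastructure. It is an illustration under stated assumptions rather than a prescription: the Falcon checkpoint named below is just one publicly available example, and the prompt is invented.

```python
# Minimal sketch: serve prompts from a locally hosted open model so that
# sensitive text never leaves the company's own infrastructure.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # example checkpoint; substitute any approved local model
)

prompt = "Summarise the key action points from the following meeting notes:\n..."
output = generator(prompt, max_new_tokens=200, do_sample=False)
print(output[0]["generated_text"])
```

The same pattern applies to a Llama 2 deployment or to an Azure OpenAI instance scoped to the company's own tenancy; the point is that the model endpoint, and therefore the data, stays under the organisation's control.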

These options avoid the risks to data and IP that come with public large language models like ChatGPT, whose uses of our data are not yet fully known, while still allowing employees to achieve the results they want.