
Cerebras Unveils World’s Fastest AI Chip, Beating Nvidia in Inference Speed


In a move that could redefine AI infrastructure, Cerebras Systems showcased its record-breaking Wafer Scale Engine (WSE) chip at Web Summit Vancouver, claiming it now holds the title of the world’s fastest AI inference engine. 

Roughly the size of a dinner plate, the latest WSE chip spans 8.5 inches (22 cm) per side and packs an astonishing 4 trillion transistors — a monumental leap from traditional processors like Intel’s Core i9 (33.5 billion transistors) or Apple’s M2 Max (67 billion). 

The result? A groundbreaking 2,500 tokens per second on Meta’s Llama 4 model, nearly 2.5 times faster than Nvidia’s recently announced benchmark of 1,000 tokens per second. “Inference is where speed matters the most,” said Naor Penso, Chief Information Security Officer at Cerebras. “Last week Nvidia hit 1,000 tokens per second — which is impressive — but today, we’ve surpassed that with 2,500 tokens per second.” 

Inference refers to how AI processes information to generate outputs like text, images, or decisions. Tokens, which can be words or characters, represent the basic units AI uses to interpret and respond. As AI agents take on more complex, multi-step tasks, inference speed becomes increasingly essential. “Agents need to break large tasks into dozens of sub-tasks and communicate between them quickly,” Penso explained. “Slow inference disrupts that entire flow.” 
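To see why throughput matters for agent pipelines, here is a back-of-the-envelope sketch in Python using the throughput figures quoted above; the workload shape (20 sequential sub-tasks of 400 generated tokens each) is a hypothetical illustration, not a published benchmark.

```python
# Back-of-the-envelope latency for a sequential agent pipeline.
# Throughput figures are the ones quoted in the article; the workload
# shape (20 sub-tasks x 400 tokens) is a hypothetical illustration.

def pipeline_latency_s(steps: int, tokens_per_step: int, tokens_per_sec: float) -> float:
    """Total generation time for a chain of dependent LLM calls."""
    return steps * tokens_per_step / tokens_per_sec

for rate in (1_000, 2_500):  # Nvidia's vs. Cerebras' reported tokens/sec
    t = pipeline_latency_s(steps=20, tokens_per_step=400, tokens_per_sec=rate)
    print(f"{rate:>5} tok/s -> {t:.1f} s end-to-end")
# 1000 tok/s -> 8.0 s end-to-end
# 2500 tok/s -> 3.2 s end-to-end
```

At 2,500 tokens per second, the same 20-step chain finishes in less than half the time, which is exactly the "flow" Penso describes agents needing.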

What sets Cerebras apart isn't just transistor count: it's the chip's design. Unlike Nvidia GPUs, which must reach out to off-chip memory, the WSE integrates 44GB of high-speed RAM directly on the chip, giving it ultra-fast data access and lower latency. Independent benchmarks back Cerebras' claims.

Artificial Analysis, a third-party testing agency, confirmed the WSE achieved 2,522 tokens per second on Llama 4, outperforming Nvidia’s new Blackwell GPU (1,038 tokens/sec). “Cerebras is the only inference solution that currently outpaces Blackwell for Meta’s flagship model,” said Artificial Analysis CEO Micah Hill-Smith. 

While CPUs and GPUs have driven AI advancements for decades, Cerebras' WSE represents a shift toward a new compute paradigm. "This isn't x86 or ARM. It's a new architecture designed to supercharge AI workloads," said Julie Shin, Chief Marketing Officer at Cerebras.

Technology Meets Therapy as AI Enters the Conversation

Several studies show that artificial intelligence has become an integral part of mental health care, changing how practitioners deliver, document, and even conceptualise therapy. According to a 2023 study, psychiatrists affiliated with the American Psychiatric Association are increasingly relying on AI tools such as ChatGPT.

Overall, 44% of respondents reported using the language model's version 3.5 and 33% had been trying out version 4.0, mainly to answer clinical questions. The study also found that 70% of those surveyed believe AI improves, or has the potential to improve, the efficiency of clinical documentation. A separate study by PsychologyJobs.com found that one in four psychologists had already begun integrating AI into their practice, and another 20% were considering adopting it soon.

The most common applications included AI-powered chatbots for client communication, automated diagnostics to support treatment planning, and natural language processing tools for analysing patient text data. As both studies pointed out, enthusiasm for AI is growing, but many mental health professionals have also raised concerns about the ethical, practical, and emotional implications of bringing it into therapeutic settings.

Therapy has traditionally been viewed as a deeply personal process involving introspection, emotional healing, and gradual self-awareness. Individuals are given a structured, empathetic environment in which to explore their beliefs, behaviours, and thoughts with the help of a professional. The advent of artificial intelligence, however, is beginning to reshape the contours of that experience.

ChatGPT is now being positioned as a complementary support in the therapeutic journey, providing continuity between sessions and enabling clients to continue their emotional work outside the therapy room. Included ethically and thoughtfully, in ways that reinforce key insights, encourage consistent reflection, and offer prompts aligned with the themes explored in formal sessions, these tools can enhance therapeutic outcomes.

Arguably the most valuable contribution AI can make in this context is facilitating insight: helping users gain a clearer understanding of why they behave and feel the way they do. Insight means moving beyond superficial awareness to identify the underlying psychological patterns that drive one's difficulties.

Recognising, for example, that a tendency to withdraw during conflict stems from a fear of emotional vulnerability rooted in past experiences is the kind of deeper self-awareness that can change a life. Such breakthroughs may happen during therapy sessions, but they often evolve and crystallise outside them, as a client revisits a discussion with their therapist or meets a situation in daily life that brings new clarity.

AI tools can be an effective companion in those moments, extending the therapeutic process beyond scheduled appointments with reflective dialogue, gentle questioning, and cognitive reframing techniques that help individuals connect the dots. Broadly, the term "AI therapy" covers a range of technology-driven approaches that aim to support or enhance the delivery of mental health care.

At its essence, it refers to the application of artificial intelligence in therapeutic contexts, spanning tools designed to support licensed clinicians as well as fully autonomous platforms that interact directly with users. AI-assisted therapy typically augments the work of human therapists with features such as chatbots that help clients practise coping mechanisms, software that tracks mood patterns over time, and data analytics that give clinicians a better understanding of client behaviour and treatment progression.

These technologies are not meant to replace mental health professionals but to empower them, optimising and personalising the therapeutic process. Fully AI-driven interventions, by contrast, represent a more self-sufficient model of care in which users interact directly with digital platforms without a human therapist.

Through sophisticated algorithms, these systems can deliver guided cognitive behavioural therapy (CBT) exercises, mindfulness practices, or structured journaling prompts tailored to the user's individual needs. Whether assisted or autonomous, AI-based therapy has a number of advantages, including the potential to make mental health support more accessible and affordable for individuals and families.

Traditional therapy is out of reach for many reasons, including high costs, long wait lists, and a shortage of licensed professionals, especially in rural or otherwise underserved areas. AI solutions delivered through mobile apps and virtual platforms can remove many of these logistical and financial barriers.

These tools cannot fully replace human therapists in complex or crisis situations, but they significantly increase the accessibility of psychological care, enabling people to seek help who would otherwise face insurmountable barriers. Demand for mental health services has risen dramatically in recent years, driven by greater awareness of mental health, reduced stigma, and the psychological toll of global crises.

The supply of qualified mental health professionals, however, has not kept pace, leaving millions without adequate care. In this context, artificial intelligence has emerged as a powerful tool for bridging the gap between need and access. By enhancing clinicians' work and streamlining key processes, AI can significantly expand the capacity of mental health systems worldwide. What was once thought futuristic is becoming a practical reality.

According to trends reported in the American Psychological Association's Monitor, AI technologies are already transforming clinical workflows and therapeutic approaches. From intelligent chatbots to algorithms that automate administrative tasks, AI is changing how mental health care is delivered at every stage of the process.

A therapist who integrates AI into their practice can not only increase efficiency but also improve the quality and consistency of the care they provide. The current AI toolbox offers a wide range of applications supporting both the clinical and operational sides of practice:

1. Assessment and Screening

Advanced natural language processing models are being used to analyse patient speech and written communications for early signs of psychological distress, including suicidal ideation, severe mood fluctuations, and trauma-related triggers. By facilitating early detection and timely intervention, these tools can help prevent crises before they escalate.
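As a loose illustration of the kind of screening step such tools automate, the toy sketch below runs an off-the-shelf sentiment model over hypothetical journal entries and flags strongly negative ones for human review. The model choice and threshold are illustrative assumptions; a real clinical screener would be purpose-built, validated, and always subject to clinician oversight.

```python
# Toy screening sketch (illustrative only, not a clinical tool).
# A generic sentiment model stands in for a purpose-built distress
# classifier; the 0.95 threshold is an arbitrary assumption.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

entries = [
    "Had a rough day, but my walk helped me reset.",
    "I feel completely hopeless and can't see a way forward.",
]

for text in entries:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.95:
        print(f"FLAG for clinician review: {text!r}")
```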

2. Intervention and Self-Help

Artificial intelligence-powered chatbots built around cognitive behavioural therapy (CBT) frameworks give users structured mental health support at their convenience, anytime, anywhere. A growing body of research, including recent randomised controlled trials, suggests these interventions can produce measurable reductions in depressive symptoms, particularly in major depressive disorder (MDD), sometimes serving as an effective alternative to conventional treatment.

3. Administrative Support 

AI tools are streamlining tasks that are often a burdensome, time-consuming part of clinical work, including drafting progress notes, assisting with diagnostic coding, and managing insurance pre-authorisation requests. These efficiencies reduce clinician workload and burnout, freeing more time and energy for patient care.

4. Training and Supervision 

AI-generated standardised patients offer a revolutionary approach to clinical training: realistic virtual clients with whom trainee therapists can practise techniques in a controlled environment. AI-based analytics can also evaluate session quality and give clinicians constructive feedback, helping them sharpen their skills and improve treatment outcomes.

Artificial intelligence is continuously evolving, and mental health professionals must stay on top of its developments, evaluate its clinical validity, and weigh the ethical implications of its use. Used properly, AI can serve as both a support system and a catalyst for innovation, ultimately extending the reach and effectiveness of modern mental health care.

As artificial intelligence (AI) gains ground in mental health, AI-powered talk therapy stands out as a significant innovation, offering practical, accessible support to individuals dealing with common psychological challenges like anxiety, depression, and stress. Delivered through interactive platforms and mobile apps, these systems provide personalised coping strategies, mood tracking, and guided therapeutic exercises.

Besides promoting continuity of care, AI tools help individuals maintain therapeutic momentum between sessions, offering support on demand when access to traditional services is limited. As a result, AI interventions are increasingly considered complementary to traditional psychotherapy rather than a replacement for it. These systems draw on evidence-based techniques from cognitive behavioural therapy (CBT) and dialectical behaviour therapy (DBT).

Translated into digital formats, these techniques let users engage in real time with strategies for emotional regulation, cognitive reframing, and behavioural activation. The tools are designed to be immediately action-oriented, so users can apply therapeutic principles directly to real-life situations as they arise, building greater self-awareness and resilience.

A person dealing with social anxiety, for example, can use an AI simulation to gradually practise social interactions in a low-pressure environment, building confidence. Likewise, individuals experiencing acute stress can call up mindfulness prompts and reminders that help them regain focus and ground themselves. These tools are built on the clinical expertise of mental health professionals but designed to fit into everyday life, providing a scalable extension of traditional care models.

For all its growing use in therapy, however, AI is not without significant challenges and limitations. One of the most commonly cited concerns is the absence of genuine human connection. Empathy, intuition, and emotional nuance are foundations of effective psychotherapy, and despite advances in natural language processing and sentiment analysis, artificial intelligence cannot fully replicate them.

Users seeking deeper relational support may find AI interactions impersonal or insufficient, leading to feelings of isolation or dissatisfaction. AI systems may also misread complex emotions or cultural nuances, producing responses that lack the sensitivity or relevance to offer meaningful support.

Privacy is another major concern for mental health applications, which routinely handle highly sensitive user data, making security paramount. Users may hesitate to engage with these platforms out of concern over how their personal data is stored, managed, or shared with third parties.

To earn widespread trust and legitimacy, developers and providers of AI therapy must maintain a high level of transparency and strong encryption, and must comply with privacy laws such as HIPAA and GDPR.

Ethical concerns also arise when algorithms make decisions in deeply personal areas. AI can unintentionally reinforce biases, oversimplify complex issues, and dispense standardised advice that fails to reflect each individual's unique context.

In a field that places such a high value on personalisation, generic or inappropriate responses are especially dangerous. Ethically sound AI therapy requires rigorous oversight, continuous evaluation of system outputs, and clear guidelines governing the proper use and limitations of these technologies. Ultimately, while AI offers promising tools for extending mental health care, its success depends on implementations that balance innovation with compassion, accuracy, and respect for individual experience.

As artificial intelligence is incorporated into mental health care at an increasing pace, mental health professionals, policymakers, developers, and educators must work together on frameworks that ensure it is applied responsibly. The future of AI therapy rests not on technological advances alone but on a commitment to ethical responsibility, clinical integrity, and human-centred care.

Robust research, inclusive algorithm development, and extensive clinician training will be central to ensuring AI solutions are both safe and therapeutically meaningful. It is equally critical to be transparent with users about the capabilities and limitations of these tools so individuals can make informed decisions about their mental health care.

Organisations and practitioners who wish to remain at the forefront of innovation should prioritise strategic implementation, treating AI not as a replacement but as a valuable partner in care. By integrating innovation with empathy, the mental health sector can harness AI's full potential to build a more accessible, efficient, and personalised future for therapy, one in which technology amplifies human connection rather than diminishing it.

Google Unveils AI With Deep Reasoning and Creative Video Capabilities

At its annual Google Marketing Live 2025 event on Wednesday, May 21, Google unveiled a comprehensive suite of artificial intelligence-powered tools designed to cement its position at the forefront of digital commerce and advertising.

The new tools are intended to revolutionise how brands engage with consumers and drive measurable growth through artificial intelligence, part of a strategic push to redefine the future of advertising and online shopping. Vidhya Srinivasan, Vice President and General Manager of Google Ads and Commerce, stressed the significance of the change in her presentation: "The future of advertising is already here, fueled by artificial intelligence."

Google followed this declaration by announcing advanced solutions for smarter bidding, dynamic creative generation, and intelligent agent-based assistants that adjust in real time to user behaviour and shifting market conditions. The launch comes at a critical moment for the company, as generative AI platforms and conversational search tools put unprecedented pressure on traditional search and shopping channels, diverting users away from them.

By treating this technological disruption as an opportunity for brands and marketers worldwide, Google underscores its commitment to staying ahead of the curve. The company's journey into artificial intelligence dates back much earlier than many people think, evolving steadily since its founding in 1998.

While Google has always been known for its groundbreaking PageRank algorithm, its formal commitment to artificial intelligence accelerated through the mid-2000s, with milestones such as the acquisition of Pyra Labs in 2003 and the launch of Google Translate in 2006. These early efforts laid the foundation for AI-driven content analysis and translation. Google Instant, introduced in 2010, showed how predictive algorithms could enhance the user experience with real-time search query suggestions.

In the years that followed, AI research and innovation grew ever more central, as evidenced by the establishment of Google X in 2011 and the strategic acquisition in 2014 of DeepMind, the reinforcement learning pioneer behind the historic AlphaGo. Since 2016, Google Assistant and tools like TensorFlow have democratised machine learning development.

Breakthroughs such as Duplex highlighted AI's increasing conversational sophistication, and Google's most recent work has embraced multimodal capabilities, with models like BERT, LaMDA, and PaLM revolutionising language understanding and dialogue. This legacy underscores AI's crucial role in driving Google's transformation across search, creativity, and business solutions.

At Google I/O 2025, its annual developer conference, Google reaffirmed its leadership in the rapidly developing field of artificial intelligence, unveiling a lineup of innovations that promise to change how people interact with technology. With a heavy emphasis on AI-driven transformation, this year's event showcased next-generation models and tools far beyond those of previous years.

The announcements ranged from AI assistants with deeper contextual intelligence to the creation of entire videos with dialogue, a monumental leap in both the creative and cognitive capabilities of AI. The highlight of the technological display was the unveiling of Gemini 2.5, Google's most advanced AI model. Positioned as the flagship of the Gemini series, it sets new industry standards for performance across key dimensions such as reasoning, speed, and contextual awareness.

Gemini 2.5 has outperformed its predecessors and rivals, including Google's own Gemini Flash, redefining expectations of what artificial intelligence can do. Among its most significant advantages is enhanced problem-solving: far more than a tool for retrieving information, it acts as a true cognitive assistant, providing precise, contextually aware responses to complex, layered queries.

Despite its significantly enhanced capabilities, it runs faster and more efficiently, making it easier to integrate seamlessly into real-time applications, from customer support to high-level planning tools. Its advanced grasp of contextual cues also allows for more intelligent, coherent conversations that feel more like collaborating with a person than interacting with a machine. This marks a paradigm shift in artificial intelligence, not merely an incremental improvement.

It signals that artificial intelligence is moving toward systems that can reason, adapt, and contribute meaningfully across creative, technical, and commercial spheres. Google I/O 2025 thus previews a future in which AI is integral to productivity, innovation, and experience design for digital creators, businesses, and developers alike.

Building on that momentum, Google announced major improvements to its Gemini large language model lineup, another step in its quest to develop more powerful, adaptive artificial intelligence systems. The new iterations, Gemini 2.5 Flash and Gemini 2.5 Pro, feature significant architectural improvements aimed at optimising performance across a wide range of uses.

Gemini 2.5 Flash, a fast, lightweight model built for high-speed use, reaches general availability in early June 2025, with the more advanced Pro version following shortly afterwards. Among the Pro model's most notable features is "Deep Think", which applies parallel processing techniques to advanced reasoning on complex tasks.

Inspired by AlphaGo's strategic modelling, Deep Think lets the AI explore multiple solution paths simultaneously, producing faster and more accurate results. That capability positions the model as a cutting-edge option for competition-grade reasoning, mathematical analysis, and programming. At a press briefing, Google DeepMind CEO Demis Hassabis highlighted the model's breakthrough performance on USAMO 2025, one of the world's most challenging mathematics benchmarks, and LiveCodeBench, a popular benchmark for advanced coding.
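Google has not published Deep Think's internals. The toy sketch below, with a stubbed `solve_once` standing in for a model call, only illustrates the general idea the article describes: sampling several reasoning paths in parallel and keeping the answer most paths agree on (a self-consistency scheme), not Google's actual implementation.

```python
# Toy illustration of parallel solution paths with majority voting
# (a self-consistency scheme); not Google's actual Deep Think code.
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def solve_once(question: str) -> str:
    """Stand-in for one sampled reasoning path from a model."""
    # A real system would call an LLM with nonzero temperature here.
    return random.choice(["42", "42", "42", "41"])  # noisy but usually right

def parallel_answer(question: str, paths: int = 8) -> str:
    with ThreadPoolExecutor(max_workers=paths) as pool:
        answers = list(pool.map(solve_once, [question] * paths))
    return Counter(answers).most_common(1)[0][0]  # majority answer wins

print(parallel_answer("What is 6 * 7?"))  # almost always '42'
```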

"Deep Think pushed the performance of models to the limit, resulting in groundbreaking results," Hassabis said. In keeping with its commitment to ethical AI deployment, Google is adopting a cautious release strategy: Deep Think will initially be accessible only to a limited group of trusted testers, whose feedback will help ensure safety, reliability, and transparency.

The deliberate rollout demonstrates Google's intent to scale frontier AI capabilities responsibly while maintaining trust and control. On the creative side, Google announced two powerful generative media models: Veo 3 for video generation and Imagen 4 for images, both significant breakthroughs in generative media technology.

These innovations give creators a deeper, more immersive toolkit for telling visual and audio stories with remarkable realism and precision. Veo 3 represents a transformative leap in video generation: for the first time, AI-generated videos are no longer silent, motion-only clips.

With fully synchronised audio, including ambient sounds, sound effects, and even real-time dialogue between characters, Veo 3's output feels more like a cinematic production than a simple algorithmic one. "For the first time in history, we are entering into a new era of video creation," said Google DeepMind CEO Demis Hassabis, highlighting both the visual fidelity and the auditory depth of the new model. Building on these breakthroughs, Google has developed Flow, a new AI-powered filmmaking platform for creative professionals.

Flow combines Google's most advanced generative models in an intuitive interface, letting storytellers design cinematic sequences with greater ease and fluidity than ever before. The company says Flow recreates the intuitive, inspired creative process, where iteration feels effortless and ideas evolve naturally. Several filmmakers have already used Flow alongside traditional methods to create short films that illustrate the technology's creative potential.

Imagen 4, the latest update to Google's image generation model, offers extraordinary improvements in visual clarity and fine detail, especially in typography and text rendering. These improvements make it a powerful tool for marketers, designers, and content creators who need visuals that combine high-quality imagery with precise, readable text.

Whether for branding, digital campaigns, or presentations, Imagen 4 marks a significant step forward for AI-based visual storytelling. Google has also made significant advances in autonomous artificial intelligence agents, at a time when the landscape of intelligent automation is evolving rapidly amid fierce competition from leading technology companies.

Microsoft's GitHub Copilot has already demonstrated how powerful AI-driven development assistants can be, and OpenAI's Codex continues to push the boundaries of what AI can offer. In this context, Google introduced tools such as Stitch and Jules, which can generate a complete website, codebase, or user interface with minimal human input. These tools signal a shift in how software and digital content get built, and the convergence of autonomous AI technologies from several industry giants underscores a trend toward automating increasingly complex knowledge work.

Through these AI systems, organisations can respond quickly to changing market demands and evolving consumer preferences via real-time recommendations and dynamic adjustments. Such responsiveness lets an organisation optimise operational efficiency, maximise resource utilisation, and create sustainable growth while staying tightly aligned with its strategic goals. AI gives businesses actionable insights that help them compete in an increasingly complex, fast-paced marketplace.

Beyond software and business applications, Google's AI innovations also hold great potential for the healthcare sector, where advances in diagnostic accuracy and personalised treatment planning could greatly improve patient outcomes. Improvements in natural language processing and multimodal interaction models will likewise make interfaces more intuitive, accessible, and useful for users from diverse backgrounds, lowering barriers to adoption.

As artificial intelligence becomes an integral part of everyday life, its influence will be transformative, reshaping industries, redefining workflows, and generating profound social effects. Google's leadership in this space points not only to a future in which AI augments human capabilities but to a new era of scientific, economic, and cultural progress.

Tech Executives Lead the Charge in Agentic AI Deployment

What was once considered a futuristic concept has quickly become a business imperative. Artificial intelligence, previously confined to experimental pilot programs, is now being integrated into the core of enterprise operations in increasingly autonomous ways.

In a survey by global consulting firm Ernst & Young (EY), technology executives predicted that within two years, more than half of their AI systems will function autonomously. The prediction marks a significant milestone in the evolution of artificial intelligence, signalling a shift from assistive technologies toward autonomous systems that can make decisions and pursue goals independently.

Generative AI has dominated the innovation spotlight in recent years, captivating leaders with its ability to produce human-like text, images, and insights. But a more advanced and less publicised form of artificial intelligence has emerged: systems that not only respond but act, autonomously or semi-autonomously, in pursuit of specific objectives.

Agentic artificial intelligence was a fringe concept in Western business dialogue until late 2024, when that changed dramatically. Global searches for "agent AI" and "AI agents" have skyrocketed in recent months, reflecting strong interest within the industry and among the public alike. Agentic AI represents a significant evolution beyond traditional chatbots and prompt-based tools.

Leveraging advances in large language models (LLMs) and the emergence of large reasoning models (LRMs), these intelligent systems are now capable of autonomous, adaptive decision-making grounded in real-time reasoning, moving beyond rule-based execution. Agentic AI systems adjust their actions according to context and goals rather than following the static, predefined instructions of earlier software or pre-AI agents.

The shift marks a new chapter for AI, in which systems act not merely as tools but as intelligent collaborators capable of navigating complexity with little human intervention. To capitalise on this wave of autonomous systems, companies must rethink how work is completed, who (or what) performs it, and how leadership must adapt to use AI as a true collaborator in strategy execution.

Artificial intelligence systems are becoming active collaborators in the workplace rather than passive tools, marking a new era of workplace innovation. Salesforce predicts a massive 327% increase in the adoption of agentic AI by 2027, a significant change for organisations, workforce strategies, and organisational structures. Yet the study finds that 85% of organisations have yet to integrate agentic AI into their operations, despite its promise. Chief Human Resource Officers (CHROs) are driving the transition, taking the lead as strategic partners in the process.

They are not only reviewing traditional HR models but also pushing initiatives to realign roles, forecast skills, and promote agile talent development. As organisations prepare for the deep changes agentic AI will bring, HR leaders must ready their workforces for jobs that do not yet exist while managing the evolution of roles that already do.

Salesforce's study examines how agentic AI is transforming the future of work, reshaping employee responsibilities, and driving demand for reskilling. The HR function is expected to lead this technological shift with foresight, flexibility, and a renewed emphasis on human-centred innovation in an AI-powered environment.

Ernst & Young's recently released Technology Pulse Poll shows a heightened sense of urgency and confidence shaping AI strategies at leading technology companies. In the survey of more than 500 technology executives, over half predicted that AI agents, autonomous or semi-autonomous systems capable of executing tasks with little or no human intervention, will constitute the majority of their future AI deployments.

The data shows that self-contained, goal-oriented AI solutions are increasingly being woven into business operations, and that the shift has already begun: about 48% of respondents are either in the process of adopting or have already fully deployed AI agents across a range of functions in their organisations.

Many of these respondents expect that within the next 24 months, more than 50% of their AI deployments will operate autonomously. Such widespread adoption reflects a growing belief that agentic AI can deliver efficiency, agility, and innovation at unprecedented scale. The survey also points to a significant increase in AI investment.

Among technology leaders, 92% plan to increase spending on AI initiatives, demonstrating AI's weight as a strategic priority. More than half of these executives are confident their companies are better prepared than industry peers when it comes to investing in and preparing for AI technologies. Optimism about the technology's potential remains strong, with 81% of respondents confident AI can help their organisations achieve key business objectives over the next year.

These findings mark an inflexion point. As agentic AI advances from exploration to execution, organisations are not only investing heavily in its development but also integrating it into day-to-day operations to enhance performance. Agentic AI will likely play an important role in the next wave of digital transformation, profoundly affecting productivity, decision-making, and competitive differentiation.

The more organisations learn about agentic artificial intelligence, the clearer its advantages over generative AI become. Generative AI excels at creating and summarising content, but agentic AI sets itself apart by proactively identifying problems, analysing anomalies, and making actionable recommendations, which is far more powerful than simply listing steps for fixing a maintenance issue.

An agentic AI system, for instance, can automatically detect a reading that deviates from its defined range, issue an alert, suggest specific adjustments, and guide users through the resolution with practical, contextualised advice. This is a significant shift from passive AI outputs to intelligent, decision-oriented systems. As enterprises move toward more autonomous operations, however, they must weigh the architectural considerations of deploying agentic AI, in particular the choice between single-agent and multi-agent frameworks.

Many businesses began their first AI projects with single-agent systems, in which one AI agent manages a wide range of tasks. In a manufacturing setting, for example, a single agent might monitor machine performance, predict failures, analyse historical maintenance data, and suggest interventions. While such systems can handle complex tasks involving layered questioning and analysis, they are often limited in scalability.

A single agent overwhelmed by a large volume and variety of data may underperform or even hallucinate, producing false or inaccurate outputs that can compromise operational reliability. As a result, multi-agent systems are gaining popularity. In these architectures, each agent is assigned specific tasks and data sources, allowing it to specialise.

One agent might monitor machine efficiency metrics, for instance, while another tracks system logs and a third follows historical downtime trends. A coordination agent can direct their efforts and aggregate their findings into a comprehensive response, with the specialists working independently or in concert under its orchestration.

Beyond enhancing each agent's accuracy, this modular design keeps the whole system scalable and resilient under complex workloads. Multi-agent systems are often a natural progression for organisations already using AI tools and data infrastructure: existing machine learning models, data streams, and historical records can be aligned with purpose-built agents, extracting greater value from prior investments.

These agents can also work together dynamically, consulting one another, drawing on predictive models, and responding to evolving situations in real time. With this evolving architecture, companies can design AI ecosystems that handle the increasing complexity of modern digital operations adaptively and efficiently.
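As a rough sketch of this pattern, the example below wires three specialised agents to a coordination agent that aggregates their findings. The agent logic and telemetry values are hypothetical stand-ins, not any particular vendor's framework.

```python
# Minimal multi-agent orchestration sketch (illustrative; agent logic
# and data are hypothetical stand-ins for real models and telemetry).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    run: Callable[[dict], str]  # each agent specialises in one data source

def efficiency_agent(data: dict) -> str:
    return "efficiency OK" if data["throughput"] >= 0.9 else "efficiency degraded"

def log_agent(data: dict) -> str:
    errors = [line for line in data["logs"] if "ERROR" in line]
    return f"{len(errors)} error(s) in system logs"

def downtime_agent(data: dict) -> str:
    d = data["downtime"]
    return f"avg downtime over last {len(d)} periods: {sum(d) / len(d):.1f} h"

def orchestrate(agents: list[Agent], data: dict) -> str:
    """Coordination agent: fan out to specialists, merge their findings."""
    return "\n".join(f"[{a.name}] {a.run(data)}" for a in agents)

telemetry = {
    "throughput": 0.82,
    "logs": ["INFO boot", "ERROR spindle overheat"],
    "downtime": [1.5, 0.0, 2.0],
}
agents = [Agent("efficiency", efficiency_agent),
          Agent("logs", log_agent),
          Agent("downtime", downtime_agent)]
print(orchestrate(agents, telemetry))
```

Each specialist stays small and testable, and the orchestrator is the single place where findings are combined, which is what keeps the design scalable as more data sources are added.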

As artificial intelligence agents become increasingly integrated into enterprise security operations, Indian organisations are moving proactively to address both new opportunities and emerging risks. Some 83% of Indian firms plan to increase security spending in the coming year because of data poisoning, a growing concern in which attackers compromise AI training datasets.

The share of IT security teams using AI agents is predicted to rise from 43% today to 76% within two years. These intelligent systems are currently used for purposes including threat detection, auditing AI models, and maintaining compliance with regulatory requirements. And while 81% of cybersecurity leaders recognise AI agents as beneficial for enhancing privacy compliance, 87% also admit they introduce regulatory challenges.

Trust remains a critical barrier, with 48% of leaders not knowing whether their organisations are using high-quality data or whether the necessary safeguards are in place to protect it. Regulatory uncertainty and gaps in data governance continue to hinder full-scale adoption, with only 55% of companies confident they can deploy AI responsibly.

A strategic, measured approach is imperative as organisations embrace agentic AI in pursuit of greater efficiency, innovation, and competitive advantage. While businesses stand to gain from those benefits, establishing robust governance frameworks to ensure AI is deployed ethically and responsibly is no less crucial.

To address challenges such as data poisoning and regulatory complexity, companies must invest in comprehensive data quality assurance, transparency mechanisms, and ongoing risk management. Cross-functional cooperation between IT, security, and human resources will also be vital to align AI initiatives with broader organisational goals and workforce transformation.

Leaders must stress continuous workforce upskilling to prepare employees for increasingly autonomous roles. Balancing innovation with accountability lets businesses maximise the potential of agentic AI while preserving trust, compliance, and operational resilience. This thoughtful approach will not only accelerate AI adoption but also enable sustainable value creation in an increasingly AI-driven business environment.

India’s Cyber Scams Create International Turmoil

Government reports indicate that high-value cyber fraud cases in India increased more than fourfold in the financial year 2024, resulting in losses totalling more than $20 million. The near-fourfold jump demonstrates the escalating threat cybercrime poses in one of the world's fastest-growing economies.

India's rapid digitisation, with hundreds of millions of financial transactions flowing every day through mobile apps, UPI platforms, and online banking systems, has made the country an ideal target for sophisticated fraud networks. Experts warn that these scams are expanding in size and complexity at an alarming rate, outpacing traditional enforcement and security measures.

Even as the country embraces digital finance, vulnerabilities in its cyber infrastructure pose a growing threat, not only to domestic users but also to the global cybersecurity community. The year 2024 marked a major change in cybercrime, with an unprecedented surge in online scams fueled by the rapid integration of artificial intelligence (AI) into criminal operations.

Technology-enabled fraud has reached alarming levels, eroding public confidence in digital systems and causing substantial financial damage to individuals across the country. The Indian Cyber Crime Coordination Centre (I4C) paints a grim picture: in the first six months of this year alone, Indians lost more than ₹11,000 crore to cyber fraud.

The staggering number of complaints filed on the National Cyber Crime Reporting Portal every day suggests the true scale of the problem is even larger than it first appears. The figures work out to daily losses of roughly ₹60 crore (₹11,000 crore spread over about 180 days), signalling the need for greater cybersecurity enforcement and public awareness. With artificial intelligence making these scams more sophisticated and harder to detect, systemic measures to safeguard digital financial ecosystems have become far more urgent.

India's digital economy, now among the world's largest, is experiencing a troubling increase in cybercrime, with high-value fraud cases rising dramatically during fiscal year 2024. Official government data indicates that cyber fraud caused financial losses exceeding ₹177 crore (approximately $20.3 million), more than double the previous fiscal year's figure.

Equally alarming is the sharp rise in significant fraud cases, those involving ₹1 lakh or more, which jumped from 6,699 in FY 2023 to 29,082 in FY 2024. With digital financial transactions occurring in the millions each day, the increase lays bare India's growing vulnerabilities in the digital space. The country's rapid transformation has been driven by affordable internet access, with data packages costing as little as ₹11 (about $0.13) per hour.

That affordability has helped build a $1 trillion mobile payments market dominated by platforms like Paytm, Google Pay, and the Walmart-backed PhonePe. The market's growth, however, has outpaced the country's cyber literacy, leaving millions vulnerable to increasingly sophisticated fraud schemes. Cybercriminals now employ an array of advanced techniques, from artificial intelligence tools and deepfake technology to impersonating authorities, manipulating voice calls, and crafting deceptive messages designed to exploit the unsuspecting.

The gap between digital access and cybersecurity awareness poses a serious threat to individuals and financial institutions alike, raising concerns about consumer safety and long-term digital trust. A striking example of India's growing entanglement in global cybercrime came in February 2024, when a federal court in Montana, United States, sentenced a 24-year-old man from Haryana to four years in prison for heading a sophisticated fraud operation that swindled over $1.2 million from elderly Americans, including a staggering $150,000 from one individual.

Through deceptive pop-ups claiming to offer tech support, the scam tricked victims into granting remote access to their computers. Once in control, the fraudsters manipulated victims into handing over large sums, which were then collected through coordinated in-person pickups across the U.S. The case reflects a deeper trend in India's changing digital landscape.

India's technology sector was once regarded as a world-class provider of IT support services, but the industry is undergoing a profound shift driven by automation, job saturation, and economic pressures exacerbated by the COVID-19 pandemic. These challenges have given rise to a shadow cybercrime economy that mirrors the formal outsourcing industry in structure, technical sophistication, and international reach.

Using call centres, messaging platforms, and AI-powered scams, these illicit networks have turned India into a key node in the global cyber fraud chain, raising serious questions about how technology-enabled crime is regulated and held accountable, and about the socio-economic factors contributing to its growth. The rising sophistication of online scams in 2024 has been attributed to the misuse of artificial intelligence (AI), which has dramatically expanded both their scale and their psychological impact.

Fraudsters now use AI-driven tools to create highly convincing content designed to deceive and exploit unsuspecting victims, from manipulated voices and cloned audio clips to realistic images and deepfake videos. Particularly troubling is the rise of AI-assisted voice cloning, used to fabricate emergency scenarios and extort money in the voices of family members and close acquaintances.

These synthetic voices achieve remarkable accuracy, making emotional manipulation easier and more effective than ever. In one high-profile case, scammers impersonated the voice of Sunil Mittal to mislead company executives into transferring money. Deepfake technology is also increasingly used to fabricate videos of celebrities and business leaders.

AI tools, often available free of charge, have made fraudulent content easy to spread online. Prominent personalities such as Anant Ambani, Virat Kohli, and MS Dhoni were targeted with deepfake videos circulated to promote a fake betting application, misinforming thousands of people and damaging public trust.

The increasing accessibility and misuse of artificial intelligence tools mark a dangerous shift in cybercrime tactics, as traditional scams evolve into emotionally manipulative, visually convincing operations. This wave of AI-generated deception challenges law enforcement agencies and tech platforms to adapt quickly to counter the emerging threats.

The face of online fraud has undergone a radical evolution in the last few years. Cybercriminals no longer rely solely on poorly written messages or easily identifiable hoaxes; their techniques are now honed to near perfection. Most malicious links circulating today are polished and technically sound, often embedded in well-designed websites with realistic login interfaces closely resembling legitimate platforms, complete with HTTPS encryption.

Threats that were once relatively easy to identify have become insidious and hard to detect even for digitally literate users. The exposure is staggering: virtually everyone with a phone or internet access receives scam messages almost daily. Some users manage to sidestep these sophisticated schemes, but others fall victim, particularly the elderly, those unfamiliar with digital technology, and anyone caught unaware.
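To make the detection problem concrete, the sketch below applies a few common red-flag heuristics (brand names embedded in untrusted domains, punycode hostnames, hyphen-heavy throwaway domains) to made-up URLs. The domains are fabricated, and real phishing defence relies on far richer signals such as reputation feeds and page-content analysis.

```python
# Toy phishing-URL heuristics (illustrative only; example domains are
# fabricated, and real detection uses much richer signals).
from urllib.parse import urlparse

TRUSTED = {"paytm.com", "google.com", "phonepe.com"}

def red_flags(url: str) -> list[str]:
    host = urlparse(url).hostname or ""
    flags = []
    if host.startswith("xn--"):
        flags.append("punycode hostname (possible homoglyph attack)")
    # A trusted brand appearing inside an untrusted domain, e.g. paytm.com.evil.in
    for brand in TRUSTED:
        if brand in host and not host.endswith(brand):
            flags.append(f"'{brand}' embedded in untrusted domain")
    if host.count("-") >= 3:
        flags.append("many hyphens (common in throwaway domains)")
    return flags

for u in ["https://paytm.com.secure-login-verify.in/kyc",
          "https://paytm.com/help"]:
    print(u, "->", red_flags(u) or "no obvious flags (HTTPS alone proves nothing)")
```

Note that the second URL passes every heuristic, which is precisely the article's point: a padlock icon and a plausible-looking page are no longer evidence of legitimacy.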

The effects of such fraud are devastating, from the significant financial toll to long-lasting emotional distress. Projections suggest that revenue lost to cybercrime in India will rise by a dramatic 75% from 2023 levels by 2025. This alarming trend points to the need for systemic reform and collaborative intervention, starting with a fundamental change of mindset: cyber fraud is not an isolated issue affecting others but a collective threat to every stakeholder in the digital ecosystem.

An effective response requires active cooperation among telecom operators, financial institutions, government agencies, social media platforms, and over-the-top (OTT) service providers. There is no escaping the fact that cybercrime is becoming more sophisticated and more prevalent, and experts believe four key actions can dismantle the infrastructure that supports digital fraud.

First, a centralised fraud intelligence bureau should allow all players in the ecosystem to report and exchange real-time information about cybercriminals, enabling swift, collective responses. Second, each digital platform should develop and deploy tailored anti-fraud technologies and share those solutions across industries. Third, ongoing public awareness campaigns should educate users about digital hygiene and common fraud tactics.

Lastly, the regulatory framework must broaden to cover all digital service providers. Telecommunications companies face strict regulation, but OTT platforms remain almost entirely unregulated, creating loopholes for fraudsters. Through a strong combination of regulation, innovation, education, and cross-sectoral collaboration, Indian citizens can begin to reclaim the integrity of their digital landscape and protect themselves from escalating cybercrime.

In a digitally accelerating yet cyber-vulnerable India, a forward-looking, cohesive strategy against online fraud is imperative. Simply responding to incidents after they occur is no longer enough; a proactive approach must combine technology, policy, and public engagement to build cyber resilience. That means building security into digital platforms, investing in continuous threat intelligence, and running targeted education campaigns to cultivate a national culture of cyber awareness.

Governments must accelerate the implementation of robust regulatory frameworks that hold every provider of digital services accountable, regardless of size or sector, for preventing fraud and safeguarding user data. Companies, in turn, must treat cybersecurity not as a compliance checkbox but as a business-critical pillar supported by dedicated infrastructure and real-time monitoring systems.

Anticipating the next moves in the cybercrime playbook demands a collective will to adapt across industries, institutions, and individuals. Realising the promise of India's digital economy requires transforming cybersecurity from a reactive measure into a national imperative, one that maintains trust, protects innovation, and secures the future of a truly digital Bharat.

Child Abuse Detection Efforts Face Setbacks Due to End-to-End Encryption

Technology has advanced dramatically in recent decades, and data now moves across devices, networks, and borders at a rapid pace. Safeguarding sensitive information has never been more important, or more complicated. End-to-end encryption is among the most robust tools available for protecting digital communication, ensuring that data remains secure from its origin to its destination.

The benefits of encryption for maintaining privacy and preventing unauthorised access are undeniable; implementing it effectively, however, presents both practical and ethical challenges for public and private organisations alike. At the same time, law enforcement and public safety agencies are seeing their capabilities reshaped by the emergence of artificial intelligence (AI).

AI offers technologies that help solve cases and markedly improve operational efficiency, from facial recognition and head detection to intelligent evidence management systems. However, its increasing use also raises serious concerns about personal privacy, regulatory compliance, and potential data misuse.

For governments and organisations adopting these powerful technologies, the critical task is striking a balance between harnessing the strengths of AI and encryption and maintaining their commitment to public trust, privacy laws, and ethical standards. End-to-end encryption (E2EE) has become a key pillar of modern data protection: it ensures that only the intended sender and recipient can access the information being exchanged.

It works by encrypting data at its origin and decrypting it only at its destination, so that not even the service providers or intermediaries managing the transfer infrastructure can read it. This framework protects information from interception, manipulation, and surveillance while in transit.
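To make the mechanics concrete, here is a minimal sketch in Python using the PyNaCl library. This is an illustration of the basic pattern only; production messengers layer far more on top, such as key verification and the forward-secrecy ratcheting used by protocols like Signal's.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: real messaging systems add key verification,
# forward secrecy, and ratcheting on top of this basic pattern.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device;
# only the public halves are ever shared.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# Any server relaying `ciphertext` sees only opaque bytes.
# Decryption requires Bob's private key, which never leaves his device.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

The key property is visible in the code: the relaying infrastructure handles only `ciphertext`, and nothing it stores or inspects can recover the message without a private key it never possesses.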

For a company handling sensitive or confidential data, especially in the health, financial, or legal sectors, adopting end-to-end encryption is not merely best practice but a strategic imperative. These measures strengthen the overall cybersecurity posture, cultivate client trust, and support regulatory compliance.

E2EE has also become increasingly important for complying with stringent data privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, the General Data Protection Regulation (GDPR) in Europe, and other jurisdictional frameworks.

With cyber threats growing in both frequency and sophistication, end-to-end encryption is an effective safeguard against information exposure in the digital age, letting businesses manage digital communication with confidence that personal and professional data is protected throughout. Yet while E2EE is widely regarded as a vital tool for digital privacy, its increasing adoption poses significant challenges for law enforcement and child protection agencies.

In the past year alone, New Zealanders made over 1 million attempts to access illegal online material, ranging from child sexual abuse content to extreme material such as bestiality and necrophilia. Thirteen individuals were arrested for possessing, disseminating, or generating such content, according to the Department of Internal Affairs (DIA). The DIA has voiced concern that encryption technologies are making this criminal activity increasingly difficult to detect and respond to.

As the name implies, end-to-end encryption restricts access to message content to the sender and recipient alone, preventing third parties, including regulatory authorities, from monitoring harmful exchanges. Eleanor Parkes, National Director of End Child Prostitution and Trafficking (ECPAT), has voiced similar concerns, warning that the widespread use of encryption could allow illegal material to circulate undetected.

As digital platforms increasingly adopt privacy-enhancing technologies, striking a balance between individual rights and collective safety has become a societal question as much as a technical one. The importance of protecting users' privacy online has never been clearer, and standard encryption remains a cornerstone of personal data protection across a wide array of digital services.

In banking, healthcare, and private communications, encryption preserves the integrity and security of information transmitted across networks. End-to-end encryption is a more advanced and more restrictive implementation of this technology: it enhances privacy while significantly limiting oversight, because only the sender and recipient of a message can access its content.

Because the service provider operating the platform cannot view or intercept communications, this appears to be the perfect solution in theory. In practice, however, the absence of oversight mechanisms poses serious risks, especially for child protection. Without built-in safeguards or the ability to monitor content, platforms can inadvertently become safe havens for sharing illegal material, including images of child sexual abuse. Herein lies the troubling paradox: the same technology designed to protect users' privacy can also shield criminals from detection.

As digital platforms continue to place a high value on user privacy, it becomes increasingly important to explore balanced approaches that do not compromise the safety and well-being of vulnerable populations, especially children. To combat the spread of illegal child sexual abuse material online, New Zealand's Department of Internal Affairs has implemented a robust Digital Child Exploitation Filtering System, designed to block access to websites hosting such content even when they use end-to-end encryption.

Although encrypted platforms present inherent challenges, the system has proven an invaluable weapon in the fight against online child exploitation. In the last year alone it supported the execution of 60 search warrants and the seizure of 235 digital devices, underscoring the scale and seriousness of the problem. The DIA reports that investigators are encountering offenders holding ever larger quantities of illegal material, and that the material itself is growing more extreme in the harm it depicts.

Parkes noted that the widespread adoption of encryption reflects the public's growing concern over digital security. A recent study, however, revealed a far more distressing reality than most people realise: child abuse material is alarmingly prevalent, and young people engaged in entirely normal online interactions are particularly vulnerable to exploitation in this changing digital environment.

A prominent representative of the New Zealand government stressed that this is not an isolated or distant issue but a deeply rooted problem demanding urgent attention and collective responsibility, both within the country and internationally. As technology evolves at an exponential rate, it is increasingly important that responses, particularly in sensitive areas like child protection, are both legally sound and responsible. Such tools must operate within a clearly defined legislative framework that prioritises privacy while still enabling effective intervention.

Safeguarding technologies should be used exclusively to detect child sexual abuse material, with the sole intent of identifying and removing content that is clearly harmful and unacceptable. Law enforcement agencies relying on AI-driven systems such as biometric analysis and head recognition must operate within strict and often complex legal frameworks. Regulations such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States set clear requirements for how data is handled, consented to, and disclosed.

The use of biometric data is also tightly regulated: legislation such as Illinois' Biometric Information Privacy Act (BIPA) imposes strict limits on how such data may be collected and used. AI governance policies are increasingly being developed at national and regional levels, reinforcing the importance of ethical, transparent, and accountable technology use. Noncompliance not only invites legal repercussions but also threatens the public trust that is essential for successfully integrating AI into public safety initiatives.

The future will require a delicate balance between innovation and regulation, ensuring that technology empowers protective efforts while safeguarding fundamental rights. Policymakers, technology developers, law enforcement, and advocacy organisations must come together to address the complex interplay between privacy and child protection with innovative, forward-looking approaches. Moving beyond the view of privacy and safety as opposing priorities is essential to fostering innovations that learn from the past and build strong ethical protections into the core of their designs.

Concretely, that means developing privacy-conscious technology that can detect harmful content without compromising user confidentiality, establishing secure and transparent reporting channels within encrypted platforms, and deepening international cooperation to combat exploitation while respecting data sovereignty. Industry transparency must also be promoted through independent oversight and accountability mechanisms that maintain public trust and validate the integrity of these protective measures.

To keep pace with the rapid evolution of the digital landscape, regulatory frameworks and technological solutions must adapt quickly, safeguarding vulnerable populations without sacrificing fundamental rights. In an increasingly interconnected world, technology can fulfil its promise as a force for good only if its approach to protecting children and preserving privacy rights for everyone is balanced, ethically robust, and proactive.

Why Microsoft Says DeepSeek Is Too Dangerous to Use

Microsoft has openly said that its workers are not allowed to use the DeepSeek app. This announcement came from Brad Smith, the company’s Vice Chairman and President, during a recent hearing in the U.S. Senate. He said the decision was made because of serious concerns about user privacy and the risk of biased content being shared through the app.

According to Smith, Microsoft does not allow DeepSeek on company devices and hasn’t included the app in its official store either. Although other organizations and even governments have taken similar steps, this is the first time Microsoft has spoken publicly about such a restriction.

The main worry is where the app stores user data. DeepSeek's privacy terms say that all user information is saved on servers based in China. This is important because Chinese laws require companies to hand over data if asked by the government. That means any data stored through DeepSeek could be accessed by Chinese authorities.

Another major issue is how the app answers questions. It’s been noted that DeepSeek avoids topics that the Chinese government sees as sensitive. This has led to fears that the app’s responses might be influenced by government-approved messaging instead of being neutral or fact-based.

Interestingly, even though Microsoft is blocking the app itself, it did allow DeepSeek’s AI model—called R1—to be used through its Azure cloud service earlier this year. But that version works differently. Developers can download it and run it on their own servers without sending any data back to China. This makes it more secure, at least in terms of data storage.

However, there are still other risks involved. Even if the model is hosted outside China, it might still share biased content or produce low-quality or unsafe code.

At the Senate hearing, Smith added that Microsoft took extra steps to make the model safer before making it available. He said the company made internal changes to reduce any harmful behavior from the model, but didn’t go into detail about what those changes were.

When DeepSeek was first added to Azure, Microsoft said the model had passed safety checks and undergone thorough testing to make sure it met company standards.

Some people have pointed out that DeepSeek could be seen as a competitor to Microsoft’s own chatbot, Copilot. But Microsoft doesn’t block every competing chatbot. For example, Perplexity is available in the Windows app store. Still, some other popular apps, like Google’s Chrome browser and its Gemini chatbot, weren’t found during a search of the store.

AI Can Now Shop for You: Visa’s Smart Payment Platform

Visa has rolled out a new system that allows artificial intelligence (AI) to not only suggest items to buy but also complete purchases for users. The newly launched platform, called Visa Intelligent Commerce, lets AI assistants shop on your behalf — while keeping your financial data secure.

The announcement was made recently during Visa’s event in San Francisco. This marks a step towards AI taking on more day-to-day tasks for users, including buying products, booking services, and managing online orders — all based on your instructions.


AI That Does More Than Just Help You Shop

Today, many AI tools can help people find products or services online. But when it comes to actually completing the purchase, those systems often stop short. Visa is working to close that gap by allowing these tools to also handle payments.

To do this, Visa has teamed up with top tech companies like Microsoft, IBM, OpenAI, Samsung, and others. Their combined goal is to build a secure way for AI to handle payments without needing access to your actual card details.


How the Technology Works

Instead of using your real credit or debit card numbers, the platform turns your card information into a digital token — a safe, coded version of your payment data. These tokens are used by approved AI agents when carrying out transactions.
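Visa has not published the underlying implementation, but the general idea of tokenization can be sketched as follows. The `TokenVault` class and its methods here are hypothetical illustrations, not Visa's actual API.

```python
# Hypothetical sketch of payment tokenization; not Visa's actual API.
# The real card number lives only in a secured vault; AI agents and
# merchants only ever see a random, revocable token.
import secrets

class TokenVault:
    """Maps opaque tokens to real card numbers, held server-side only."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_hex(16)  # unguessable; carries no card data
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only the payment network can resolve a token back to a card.
        return self._vault[token]

vault = TokenVault()
agent_token = vault.tokenize("4111 1111 1111 1111")  # standard test number
# The AI agent transacts with agent_token; leaking the token does not
# expose the card, and the token can be revoked without reissuing it.
```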

Users still stay in control. You can set limits on how much the AI can spend, pick which types of stores it’s allowed to use, or even require a manual approval before it pays.

For example, you might ask your AI to book a hotel room under a certain price or order groceries every week. The AI would then search websites, compare options, and make the purchase — all without you needing to fill out your payment details each time.
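A rough sketch of how such controls might be enforced in code follows; the policy fields and rules below are hypothetical illustrations, not part of Visa's platform.

```python
# Hypothetical spending controls for an AI shopping agent;
# field names and rules are illustrative, not Visa's actual platform.
from dataclasses import dataclass, field

@dataclass
class SpendingPolicy:
    per_purchase_limit: float                  # hard cap per transaction
    allowed_categories: set[str] = field(default_factory=set)
    approval_threshold: float = 0.0            # above this, ask the user

    def authorize(self, amount: float, category: str) -> str:
        if category not in self.allowed_categories:
            return "declined: category not allowed"
        if amount > self.per_purchase_limit:
            return "declined: over spending limit"
        if amount > self.approval_threshold:
            return "pending: manual approval required"
        return "approved"

policy = SpendingPolicy(per_purchase_limit=200.0,
                        allowed_categories={"hotels", "groceries"},
                        approval_threshold=150.0)
print(policy.authorize(95.0, "groceries"))    # approved
print(policy.authorize(180.0, "hotels"))      # pending: manual approval required
print(policy.authorize(60.0, "electronics"))  # declined: category not allowed
```

The design point is that the policy sits between the agent and the payment rail, so even a misbehaving agent cannot exceed the limits the user has set.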


Safety Is the Main Priority

Visa is aware that letting AI spend money on your behalf might raise concerns. That's why it has built strong protections into the system. Only AI agents that you have approved can access your tokenized payment information, and every transaction is monitored in real time by Visa's fraud-detection systems, which have already helped prevent billions of dollars in fraud.

The company is also using data tools that protect your privacy. When the AI needs data to personalize your shopping, it uses temporary access methods that keep you in charge of what’s shared.

Visa believes this could be the next big change in online shopping, similar to past shifts from physical stores to websites, and then from computers to phones. With their global network already in place, Visa is prepared to support this new way of shopping across many countries.

Developers can start using the tools now, and test programs will roll out soon. As AI becomes part of daily life, Visa hopes this new system will make everyday shopping faster, easier, and more secure.