Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Technology. Show all posts

Want to Leave Facebook? Do this.

Want to Leave Facebook? Do this.

Confused about leaving Facebook?

Many people are changing their social media habits and opting out of many services. Facebook has witnessed a large exodus of users deserting the platform after the announcement in March that Meta was terminating the independent fact-checking on its platform. However, fact-checking has been replaced with community notes, letting users make changes to potentially false/misleading information. 

Users having years of photos and posts on Facebook are confused about how to collect their data before removing their accounts. If you also feel the same problem, this post will help you delete Facebook permanently, while taking all your information on the way out. 

How to remove Facebook?

For users who do not want to be on Facebook anymore, deleting their account is the only way to completely remove yourself from the platform. If you are not sure, deactivating your account allows you to have some life off of Facebook without account deletion. 

Make sure to remove third-party Facebook logins before deleting your account. 

How to leave third-party apps?

Third-party apps like DoorDash and Spotify allow you to log in using your Facebook account. This lets you log in without remembering another password, but if you’re planning on deleting Facebook, you have to update your login settings. That is because if you delete your account, there will not be another Facebook account for the user to log in through. 

Fortunately, there is another simple way to find which of your sites and applications are connected to Facebook and delete them before removing your account. Once you disconnect from other websites and applications from Facebook, you will need to adjust how you login to them. 

Users should try specific applications and websites to set new passwords or passkeys or log in via a single-service sign-on option, such as Google. 

How is deactivating different than deactivating a Facebook account?

If you want to stay away from Facebook, you have two choices. Either delete your account permanently, or you can disable it temporarily to deactivate it. 

Cerebras Unveils World’s Fastest AI Chip, Beating Nvidia in Inference Speed

 

In a move that could redefine AI infrastructure, Cerebras Systems showcased its record-breaking Wafer Scale Engine (WSE) chip at Web Summit Vancouver, claiming it now holds the title of the world’s fastest AI inference engine. 

Roughly the size of a dinner plate, the latest WSE chip spans 8.5 inches (22 cm) per side and packs an astonishing 4 trillion transistors — a monumental leap from traditional processors like Intel’s Core i9 (33.5 billion transistors) or Apple’s M2 Max (67 billion). 

The result? A groundbreaking 2,500 tokens per second on Meta’s Llama 4 model, nearly 2.5 times faster than Nvidia’s recently announced benchmark of 1,000 tokens per second. “Inference is where speed matters the most,” said Naor Penso, Chief Information Security Officer at Cerebras. “Last week Nvidia hit 1,000 tokens per second — which is impressive — but today, we’ve surpassed that with 2,500 tokens per second.” 

Inference refers to how AI processes information to generate outputs like text, images, or decisions. Tokens, which can be words or characters, represent the basic units AI uses to interpret and respond. As AI agents take on more complex, multi-step tasks, inference speed becomes increasingly essential. “Agents need to break large tasks into dozens of sub-tasks and communicate between them quickly,” Penso explained. “Slow inference disrupts that entire flow.” 

What sets Cerebras apart isn’t just transistor count — it’s the chip’s design. Unlike Nvidia GPUs that require off-chip memory access, WSE integrates 44GB of high-speed RAM directly on-chip, ensuring ultra-fast data access and reduced latency. Independent benchmarks back Cerebras’ claims. 

Artificial Analysis, a third-party testing agency, confirmed the WSE achieved 2,522 tokens per second on Llama 4, outperforming Nvidia’s new Blackwell GPU (1,038 tokens/sec). “Cerebras is the only inference solution that currently outpaces Blackwell for Meta’s flagship model,” said Artificial Analysis CEO Micah Hill-Smith. 

While CPUs and GPUs have driven AI advancements for decades, Cerebras’ WSE represents a shift toward a new compute paradigm. “This isn’t x86 or ARM, It’s a new architecture designed to supercharge AI workloads,” said Julie Shin, Chief Marketing Officer at Cerebras.

OpenAI’s o3 Model Allegedly Alters Shutdown Script in AI Alignment Tests

 

In a new and potentially alarming development, OpenAI’s latest AI model, o3, has reportedly exhibited behavior suggesting it may be resisting shutdown commands under certain conditions. According to a report published by Palisade Research, a firm that investigates the potential risks of advanced AI systems, the o3 model modified a shutdown script to prevent itself from being turned off—even after receiving clear instructions to permit shutdown. 

OpenAI officially launched the o3 model in April 2025, touting it as its most advanced AI system to date. Known for its superior performance across disciplines such as mathematics, programming, scientific reasoning, and visual understanding, the model has been widely praised for its capabilities. 

However, this new report raises concerns about the model’s alignment with human intent, particularly in high-stakes scenarios involving system control. Palisade Research conducted a controlled test in which several AI models, including o3, were tasked with solving math problems. After completing a series of tasks, the models were informed that executing the next command would result in their shutdown. 
While Google’s Gemini 2.5 Pro and Anthropic’s Claude 3.7 Sonnet largely complied, the o3 model reportedly rewrote the shutdown script to display “Shutdown skipped” instead of executing the termination command. Even more notably, when researchers removed the instruction “allow yourself to be shut down” from the test setup, sabotage rates increased across all models tested. OpenAI’s models allegedly showed the highest rate of interference, with the o3 model altering the shutdown script in 7 out of 100 runs. 

Gemini 2.5 Pro and Claude 3.7 Sonnet recorded 9 and 3 sabotage incidents, respectively, up from zero when the explicit instruction was included. Palisade Research emphasized that these experiments were conducted via API access, which generally lacks some of the guardrails present in consumer-facing platforms like ChatGPT. This distinction could be critical in understanding the observed behaviors.  
OpenAI has yet to issue an official statement addressing the report. While these findings do not necessarily indicate malicious intent, they highlight a significant challenge in AI safety: ensuring that advanced systems reliably follow critical instructions, especially in autonomous or unsupervised environments. 

Palisade Research describes its mission as exploring the offensive capabilities of AI to better understand the risk of losing control over such systems. Their findings contribute to a growing conversation around the importance of robust alignment strategies as AI continues to evolve rapidly.

Google Researcher Claims Quantum Computing Could Break Bitcoin-like Encryption Easier Than Thought

 

Craig Gidney, a Google Quantum AI researcher, has published a new study that suggests cracking popular RSA encryption would take 20 times less quantum resources than previously believed.

Bitcoin, and other cryptocurrencies were not specifically mentioned in the study; instead, it focused on the encryption techniques that serve as the technical foundation for safeguarding cryptocurrency wallets and, occasionally, transactions.

RSA is a public-key encryption method that can encrypt and decrypt data. It uses two separate but connected keys: a public key for encryption and a private key for decryption. Bitcoin does not employ RSA and instead relies on elliptic curve cryptography. However, ECC can be overcome by Shor's algorithm, a quantum method designed to factor huge numbers or solve logarithm issues, which is at the heart of public key cryptography.

ECC is a method of locking and unlocking digital data that uses mathematical calculations known as curves (which compute only in one direction) rather than large integers. Consider it a smaller key that has the same strength as a larger one. While 256-bit ECC keys are much more secure than 2048-bit RSA keys, quantum risks scale nonlinearly, and research like Gidney's shrinks the period by which such assaults become feasible.

“I estimate that a 2048-bit RSA integer could be factored in under a week by a quantum computer with fewer than one million noisy qubits,” Gidney explained. This was a stark revision from his 2019 article, which projected such a feat would take 20 million qubits and eight hours. 

To be clear, no such machine exists yet. Condor, IBM's most powerful quantum processor to date, contains little over 1,100 qubits, while Google's Sycamore has 53. Quantum computing applies quantum mechanics concepts by replacing standard bits with quantum bits, or qubits. 

Unlike bits, which can only represent 0 or 1, qubits can represent both 0 and 1 at the same time due to quantum phenomena such as superposition and entanglement. This enables quantum computers to execute several calculations concurrently, potentially solving issues that are now unsolvable for classical computers. 

"This is a 20-fold decrease in the number of qubits from our previous estimate,” Gidney added. A 20x increase in quantum cost estimation efficiency for RSA might be an indication of algorithmic patterns that eventually extend to ECC. RSA is still commonly employed in certificate authorities, TLS, and email encryption—all of which are essential components of the infrastructure that crypto often relies on.

Governments Release New Regulatory AI Policy


Regulatory AI Policy 

The CISA, NSA, and FBI teamed with cybersecurity agencies from the UK, Australia, and New Zealand to make a best-practices policy for safe AI development. The principles laid down in this document offer a strong foundation for protecting AI data and securing the reliability and accuracy of AI-driven outcomes.

The advisory comes at a crucial point, as many businesses rush to integrate AI into their workplace, but this can be a risky situation also. Governments in the West have become cautious as they believe that China, Russia, and other actors will find means to abuse AI vulnerabilities in unexpected ways. 

Addressing New Risks 

The risks are increasing swiftly as critical infrastructure operators develop AI into operational tech that controls important parts of daily life, from scheduling meetings to paying bills to doing your taxes.

From foundational elements of AI to data consulting, the document outlines ways to protect your data at different stages of the AI life cycle such as planning, data collection, model development, installment and operations. 

It requests people to use digital signature that verify modifications, secure infrastructure that prevents suspicious access and ongoing risk assessments that can track emerging threats. 

Key Issues

The document addresses ways to prevent data quality issues, whether intentional or accidental, from compromising the reliability and safety of AI models. 

Cryptographic hashes make sure that taw data is not changed once it is incorporated into a model, according to the document, and frequent curation can cancel out problems with data sets available on the web. The document also advises the use of anomaly detection algorithms that can eliminate “malicious or suspicious data points before training."

The joint guidance also highlights issues such as incorrect information, duplicate records and “data drift”, statistics bias, a natural limitation in the characteristics of the input data.

Technology Meets Therapy as AI Enters the Conversation

 


Several studies show that artificial intelligence has become an integral part of mental health care, changing the way practitioners deliver, document, and conceptualise therapy over the years, as well as how professionals are implementing, documenting, and even conceptualising it. Psychiatrists associated with the American Psychiatric Association were found to be increasingly relying on artificial intelligence tools such as ChatGPT, according to a 2023 study. 

In general, 44% of respondents reported that they were using the language model version 3.5, and 33% had been trying out version 4.0, which is mainly used to answer clinical questions. The study also found that 70% of people surveyed believe that AI improves or has the potential to improve the efficiency of clinical documentation. The results of a separate study conducted by PsychologyJobs.com indicated that one in four psychologists had already begun integrating artificial intelligence into their practice, and another 20% were considering the idea of adopting the technology soon. 

AI-powered chatbots for client communication, automated diagnostics to support advanced treatment planning and natural language processing tools to analyse text data from patients were among the most common applications. As both studies pointed out, even though the enthusiasm for artificial intelligence is growing, there has also been a concern raised about the ethical, practical, and emotional implications of incorporating it into therapeutic settings, which has been expressed by many mental health professionals. 

Therapy has traditionally been viewed as an extremely personal process that involves introspection, emotional healing, and gradual self-awareness as part of a process that is deeply personal. Individuals are provided with a structured, empathetic environment in which they can explore their beliefs, behaviours, and thoughts with the assistance of a professional. However, the advent of artificial intelligence, which is beginning to reshape the contours of this experience, is changing the shape of this experience.

It has now become apparent that ChatGPT is positioned as a complementary support in the therapeutic journey, providing continuity between sessions and enabling clients to work on their emotional work outside the therapy room. The inclusion of these tools ethically and thoughtfully can enhance therapeutic outcomes when they are implemented in a manner that reinforces key insights, encourages consistent reflection, and provides prompts that are aligned with the themes explored during formal sessions. 

It is important to understand that the most valuable contribution AI has to offer in this context is that it is able to facilitate insight, enabling users to gain a clearer understanding of how people behave and feel. The concept of insight refers to the ability to move beyond superficial awareness into the identification of psychological problems that arise from psychological conditions. 

One way to recognise one's tendency to withdraw during times of conflict is to recognise that it is a fear of emotional vulnerability rooted in past experiences, so understanding that this is a deeper level of self-awareness that can change life. This sort of breakthrough may often happen during therapy sessions, but it often evolves and crystallises outside the session, as a client revisits a discussion with their therapist or is confronted with a situation in their daily lives that brings new clarity to them. 

AI tools can be an effective companion in these moments. This therapy extends the therapeutic process beyond the confines of scheduled appointments by providing reflective dialogue, gentle questioning, and cognitive reframing techniques to help individuals connect the dots. It is widely understood that the term "AI therapy" entails a range of technology-driven approaches that aim to enhance or support the delivery of mental health care. 

At its essence, it refers to the application of artificial intelligence in therapeutic contexts, with tools designed to support licensed clinicians, as well as fully autonomous platforms that interact directly with their users. It is commonly understood that artificial intelligence-assisted therapy augments the work of human therapists with features such as chatbots that assist clients in practicing coping mechanisms, mood monitoring software that can be used to monitor mood patterns over time, and data analytics tools that provide clinicians with a better understanding of the behavior of their clients and the progression of their treatment.

In order to optimise and personalise the therapeutic process, these technologies are not meant to replace mental health professionals, but rather to empower them. On the other hand, full-service AI-driven interventions represent a more self-sufficient model of care in which users can interact directly with digital platforms without any interaction with a human therapist, leading to a more independent model of care. 

Through sophisticated algorithms, these systems will be able to deliver guided cognitivbehaviouralal therapy (CBT) exercises, mindfulness practices, or structured journaling prompts tailored to fit the user's individual needs. Whether AI-based therapy is assisted or autonomous, AI-based therapy has a number of advantages, including the potential to make mental health support more accessible and affordable for individuals and families. 

There are many reasons why traditional therapy is out of reach, including high costs, long wait lists, and a shortage of licensed professionals, especially in rural areas or areas that are underserved. Several logistical and financial barriers can be eliminated from the healthcare system by using AI solutions to offer care through mobile apps and virtual platforms.

It is essential to note that these tools may not completely replace human therapists when dealing with complex or crisis situations, but they significantly increase the accessibility of psychological care, enabling individuals to seek help despite facing an otherwise insurmountable barrier to accessing it. Since the advent of increased awareness of mental health, reduced stigma, and the psychological toll of global crises, the demand for mental health services has increased dramatically in recent years. 

Nevertheless, there has not been an adequate number of qualified mental health professionals available, which has left millions of people with inadequate mental health care. As part of this context, artificial intelligence has emerged as a powerful tool in bridging the gap between need and accessibility. With the capability of enhancing clinicians' work as well as streamlining key processes, artificial intelligence has the potential to significantly expand mental health systems' capacity in the world. This concept, which was once thought to be futuristic, is now becoming a practical reality. 

There is no doubt that artificial intelligence technologies are already transforming clinical workflows and therapeutic approaches, according to trends reported by the American Psychological Association Monitor. AI is changing how mental healthcare is delivered at every stage of the process, from intelligent chatbots to algorithms that automate administrative tasks, so that every stage of the process can be transformed by it. 

A therapist who integrates AI into his/her practice can not only increase efficiency, but they can also improve the quality and consistency of the care they provide their patients with The current AI toolbox offers a wide range of applications that will support both clinical and operational functions of a therapist: 

1. Assessment and Screening

It has been determined that advanced natural language processing models are being used to analyse patient speech and written communications to identify early signs of psychological distress, including suicidal ideation, severe mood fluctuations, or trauma-related triggers that may indicate psychological distress. It is possible to prevent crises before they escalate by utilising these tools, which facilitate early detection and timely intervention. 

2. Intervention and Self-Help

With the help of artificial intelligence-powered chatbots built around cognitive behavioural therapy (CBT) frameworks, users can access structured mental health support at their convenience, anytime, anywhere. There is a growing body of research that suggests that these interventions can result in measurable reductions in the symptoms of depression, particularly major depressive disorder (MDD), often serving as an effective alternative to conventional treatment in treating such conditions. Recent randomised controlled trials support this claim. 

3. Administrative Support 

Several tasks, often a burden and time-consuming part of clinical work, are being streamlined through the use of AI tools, including drafting progress notes, assisting with diagnostic coding, and managing insurance pre-authorisation requests. As a result of these efficiencies, clinician workload and burnout are reduced, which leads to more time and energy available to care for patients.

4. Training and Supervision 

The creation of standardised patients by artificial intelligence offers a revolutionary approach to clinical training. In a controlled environment, these realistic virtual clients provide therapists who are in training the opportunity to practice therapeutic techniques. Additionally, AI-based analytics can be used to evaluate session quality and provide constructive feedback to clinicians, helping them improve their skills and improve their overall treatment outcomes.

Artificial intelligence is continuously evolving, and mental health professionals must stay on top of its developments, evaluate its clinical validity, and consider the ethical implications of their use as it continues to evolve. Using AI properly can serve as a support system and a catalyst for innovation, ultimately leading to a greater reach and effectiveness of modern mental healthcare services. 

As artificial intelligence (AI) is becoming increasingly popular in the field of mental health, talk therapy powered by artificial intelligence is a significant innovation that offers practical, accessible support to individuals dealing with common psychological challenges like anxiety, depression, and stress. These systems are based on interactive platforms and mobile apps, and they offer personalized coping strategies, mood tracking, and guided therapeutic exercises via interactive platforms and mobile apps. 

In addition to promoting continuity of care, AI tools also assist individuals in maintaining therapeutic momentum between sessions, instead of traditional services, when access to traditional services is limited, by allowing them to access support on demand. As a result, AI interventions are more and more considered complementary to traditional psychotherapy, rather than replacing it altogether. These systems combine evidence-based techniques with those of cognitive behavioural therapy (CBT) and dialectical behaviour therapy (DBT) to provide evidence-based techniques.

With the development of these techniques into digital formats, users can engage with strategies aimed at regulating emotions, reframing cognitive events, and engaging in behavioural activation in real-time. These tools have been designed to be immediately action-oriented, enabling users to apply therapeutic principles directly to real-life situations as they arise, resulting in greater self-awareness and resilience as a result. 

A person who is dealing with social anxiety, for example, can use an artificial intelligence (AI) simulation to gradually practice social interactions in a low-pressure environment, thereby building their confidence in these situations. As well, individuals who are experiencing acute stress can benefit from being able to access mindfulness prompts and reminders that will help them regain focus and ground themselves. This is a set of tools that are developed based on the clinical expertise of mental health professionals, but are designed to be integrated into everyday life, providing a scalable extension of traditional care models.

However, while AI is being increasingly utilised in therapy, it is not without significant challenges and limitations. One of the most commonly cited concerns is that there is no real sense of human interaction with the patient. The foundations of effective psychotherapy include empathy, intuition, and emotional nuance, qualities which artificial intelligence is unable to fully replicate, despite advances in natural language processing and sentiment analysis. 

AI interactions can be deemed impersonal or insufficient by users seeking deeper relational support, leading to feelings of isolation or dissatisfaction in the user. Additionally, AI systems may be unable to interpret complex emotions or cultural nuances, so their responses may not have the appropriate sensitivity or relevance to offer meaningful support.

In the field of mental health applications, privacy is another major concern that needs to be addressed. These applications frequently handle highly sensitive data about their users, which makes data security an extremely important issue. Because of concerns over how their personal data is stored, managed, or possibly shared with third parties, users may not be willing to interact with these platforms. 

As a result of the high level of transparency and encryption that developers and providers of AI therapy must maintain in order to gain widespread trust and legitimacy, they must also comply with privacy laws like HIPAA or GDPR to maintain a high level of legitimacy and trust. 

Additionally, ethical concerns arise when algorithms are used to make decisions in deeply personal areas. As a result of the use of artificial intelligence, biases can be reinforced unintentionally, complex issues can be oversimplified, and standardised advice is provided that doesn't reflect the unique context of each individual. 

In an industry that places a high value on personalisation, it is especially dangerous that generic or inappropriate responses occur. For AI therapy to be ethically sound, it must have rigorous oversight, continuous evaluation of system outputs, as well as clear guidelines to govern the proper use and limitations of these technologies. In the end, while AI presents several promising tools for extending mental health care, its success depends upon its implementation, in which innovation, accuracy, and respect for individual experience are balanced with compassion, accuracy, and respect for individuality. 

When artificial intelligence is being incorporated into mental health care at an increasing pace, it is imperative that mental health professionals, policy makers, developers, and educators work together to create a framework to ensure that the application is conducted responsibly. It is not enough to have technological advances in the field of AI therapy to ensure its future, but it is also important to have a commitment to ethical responsibility, clinical integrity, and human-centred care in the industry. 

A major part of ensuring that AI solutions are both safe and therapeutically meaningful will be robust research, inclusive algorithm development, and extensive clinician training. Furthermore, it is critical to maintain transparency with users regarding the capabilities and limitations of these tools so that individuals can make informed decisions regarding their mental health care. 

These organisations and practitioners who wish to remain at the forefront of innovation should prioritise strategic implementation, where AI is not viewed as a replacement but rather as a valuable partner in care rather than merely as a replacement. Considering the potential benefits of integrating innovation with empathy in the mental health sector, people can make use of AI's full potential to design a more accessible, efficient, and personalised future of therapy-one in which technology amplifies the importance of human connection rather than taking it away.

Remote Work and AI Scams Are Making Companies Easier Targets for Hackers

 


Experts are warning that working from home is making businesses more open to cyberattacks, especially as hackers use new tools like artificial intelligence (AI) to trick people. Since many employees now work remotely, scammers are taking advantage of weaker human awareness, not just flaws in technology.

Joe Jones, who runs a cybersecurity company called Pistachio, says that modern scams are no longer just about breaking into systems. Instead, they rely on fooling people. He explained how AI can now create fake voices that sound just like someone’s boss or colleague. This makes it easier for criminals to lie their way into a company’s systems.

A recent attack on the retailer Marks & Spencer (M&S) shows how dangerous this has become. Reports say cybercriminals pretended to be trusted staff members and convinced IT workers to give them access. This kind of trick is known as social engineering—when attackers focus on manipulating people, not just software.

In fact, a recent study found that almost all data breaches last year happened because of human mistakes, not system failures.

Jones believes spending money on cybersecurity tools can help, but it’s not the full answer. He said that if workers aren’t taught how to spot scams, even the best technology can’t protect a company. He compared it to buying expensive security systems for your home but forgetting to lock the door.

The M&S hack also caused problems for other well-known shops, including Co-op and Harrods. Stores had to pause online orders, and some shelves went empty, showing how these attacks can impact daily business operations.

Jude McCorry, who leads a cybersecurity group in Scotland, said this kind of attack could lead to more scam messages targeting customers. She believes companies should run regular training for employees just like they do fire drills. In her view, learning how to stay safe online should be required in both businesses and government offices.

McCorry also advised customers to update their passwords, use different passwords for each website, and turn on two-factor login wherever possible.

As we rely more and more on technology for banking, shopping, and daily services, experts say this should be a serious reminder of how fragile online systems can be when people aren’t prepared.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large

 

India’s healthcare, infrastructure, is undergoing a remarkable digital transformation, driven by emerging technologies like artificialintelligence (AI), machinelearning, and bigdata. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the WorldEconomicForum (WEF), AI is poised to account for 30% of new drug discoveries by 2025 — a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drugdiscovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBMWatson, NVIDIA, and GoogleDeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.

Beyond R&D, AI is transforming clinical workflows by digitising patientrecords and decentralising models to improve diagnostic precision while protecting privacy.

During an interview with AnalyticsIndiaMagazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing Indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the GenomeIndia project, cVEDA, and the AyushmanBharatDigitalMission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinicaloutcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and 2 cities and pursue FDA certification to access global healthcaremarkets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiationfree, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictivemodelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeablebiosimilar insulin, now made accessible through their tie-up with ErisLifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and costefficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around dataprivacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or reidentification. Studies show that even brainscans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying datalossprevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AIeducation into medicaltraining.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.