Google to Launch Gemini AI for Children Under 13

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 with parent-managed Google accounts, as tech companies vie to attract young users with AI products.

Google will launch its Gemini AI chatbot soon for children under 13 with parent-managed Google accounts, as tech companies compete to attract young users with AI tools. According to an email sent to the parent of an 8-year-old, Gemini apps will soon be available to the child, who will be able to use Gemini to ask questions, get homework help, and create stories. 

The chatbot will be available to children whose guardians use Family Link, a Google feature that lets families set up Gmail and opt in to services like YouTube for their children. To register a child account, a parent provides Google with the child’s personal information, such as name and date of birth. 

According to Google spokesperson Karl Ryan, Gemini has specific guardrails for younger users that restrict the chatbot from creating unsafe or harmful content, and if a child with a Family Link account uses Gemini, the company will not use that data to train its AI models. 

Opening Gemini to children could drive chatbot use among a vulnerable population at a time when companies, colleges, schools, and others are still grappling with the effects of popular generative AI technology. These systems are trained on massive datasets to produce human-like text and realistic images and videos. Google and other AI chatbot developers are locked in fierce competition for young users’ attention. 

Recently, President Donald Trump urged schools to embrace AI tools for teaching and learning. Millions of teens are already using chatbots as study aids, virtual companions, and writing coaches. Experts have warned that chatbots could pose serious threats to child safety. 

The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children who may face difficulties understanding that the chatbots are not humans. 

According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot. 

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features

WhatsApp claims even it cannot see users' private data during processing

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. Meta claims even it cannot see the messages while processing them: the system relies on encrypted cloud infrastructure and hardware-based isolation so that the data being processed is not visible to anyone, including Meta. 

About private processing

For those who decide to use Private Processing, the system begins with an anonymous verification step performed via the user’s WhatsApp client to confirm the user’s validity. 

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs): secure, isolated virtual machines running on cloud infrastructure that keep AI requests hidden. 

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption (see the sketch below)
  • Restricts storage or logging of messages after processing
  • Makes logs and binary images available for external verification and audits
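
The flow in these bullets can be illustrated with a minimal conceptual sketch in Python. This is not Meta's code: the shared Fernet key stands in for the attested, hardware-bound key a real TEE would expose, and the function names are hypothetical.

    # Minimal sketch of the device-to-TEE flow (assumptions: Fernet stands in for
    # the attested TEE key; function names are hypothetical, not WhatsApp's API).
    from cryptography.fernet import Fernet

    tee_cipher = Fernet(Fernet.generate_key())  # in practice, derived via TEE attestation

    def send_ai_request(message: str) -> str:
        """Encrypt the request on the device; only the TEE can read it in transit."""
        ciphertext = tee_cipher.encrypt(message.encode())
        return process_inside_tee(ciphertext)

    def process_inside_tee(ciphertext: bytes) -> str:
        """Runs inside the isolated VM: decrypt, respond, retain nothing."""
        plaintext = tee_cipher.decrypt(ciphertext).decode()
        reply = f"AI summary of: {plaintext}"  # placeholder for the actual model call
        del plaintext                          # no storage or logging after processing
        return reply

    print(send_ai_request("Summarize my unread group chats"))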

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp now joins companies like Apple that introduced confidential AI computing models over the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

The approach is similar to Apple’s Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is applying it only to select features, and while Apple has announced plans to extend its model across all of its AI tools, WhatsApp has made no such commitment yet. 

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
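
As a rough, hypothetical illustration of that OHTTP split, the sketch below shows how a relay can see who is connecting but not what was asked, while the processing gateway sees the request content but no identity. Real OHTTP uses HPKE encapsulation and anonymous credentials; Fernet and these function names are stand-ins for illustration only.

    # Conceptual OHTTP-style relay split (Fernet and names are illustrative stand-ins).
    from cryptography.fernet import Fernet

    gateway_cipher = Fernet(Fernet.generate_key())  # key known only to the gateway

    def client_encapsulate(request: str) -> bytes:
        """Encrypt the request for the gateway; the relay only ever sees ciphertext."""
        return gateway_cipher.encrypt(request.encode())

    def relay_forward(encapsulated: bytes, client_ip: str) -> str:
        """The relay knows the client's network identity but strips it before forwarding."""
        return gateway_handle(encapsulated)  # client_ip is deliberately not passed on

    def gateway_handle(encapsulated: bytes) -> str:
        """The gateway reads the request content but receives no identifying metadata."""
        return "processed: " + gateway_cipher.decrypt(encapsulated).decode()

    print(relay_forward(client_encapsulate("rewrite this message politely"), "198.51.100.23"))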

Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds


A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, investigatory powers commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

AI Now Writes Up to 30% of Microsoft’s Code, Says CEO Satya Nadella


Artificial intelligence is rapidly reshaping software development at major tech companies, with Microsoft CEO Satya Nadella revealing that between 20% and 30% of code in the company’s repositories is currently generated by AI tools. 

Speaking during a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference, Nadella shed light on how AI is becoming a core contributor to Microsoft’s development workflows. He noted that Microsoft is increasingly relying on AI not just for coding but also for quality assurance. 

“The agents we have for reviewing code; that usage has increased,” Nadella said, adding that the performance of AI-generated code differs depending on the programming language. While Python showed strong results, C++ remained a challenge. “C Sharp is pretty good but C++ is not that great. Python is fantastic,” he noted. 

When asked about the role of AI in Meta’s software development, Zuckerberg did not provide a specific figure but shared that the company is prioritizing AI-driven engineering to support the development of its Llama models. 

“Our bet is that probably half the development is done by AI as opposed to people and that will just kind of increase from there,” Zuckerberg said. 

Microsoft’s Chief Technology Officer Kevin Scott has previously projected that AI will be responsible for generating 95% of all code within the next five years. Speaking on the 20VC podcast, Scott emphasized that human developers will still play a vital role. 

“Very little is going to be — line by line — human-written code,” he said, but added that AI will “raise everyone’s level,” making it easier for non-experts to create functional software. The comments from two of tech’s biggest leaders point to a future where AI not only augments but significantly drives software creation, making development faster, more accessible, and increasingly automated.

Threat Alert: Hackers Using AI and New Tech to Target Businesses

Hackers are exploiting the advantages of new tech and the availability of credentials, commercial tools, and other resources to launch advanced attacks faster, causing concerns among cybersecurity professionals. 

Global Threat Landscape Report 2025

The 2025 Global Threat Landscape Report by FortiGuard Labs highlights a “dramatic escalation in scale and advancement of cyberattacks,” driven by the rapid adoption of hostile AI tools and commercial malware and attacker toolkits.  

According to the report, the data suggests cybercriminals are advancing faster than ever, “automating reconnaissance, compressing the time between vulnerability disclosure and exploitation, and scaling their operations through the industrialization of cybercrime.”

According to the researchers, hackers are exploiting all types of threat resources in a “systematic way” to disrupt the traditional advantages enjoyed by defenders. This has put organizations on alert, pushing them to implement new defense measures and level up to mitigate these evolving threats. 

Game changer AI

AI has become a key tool for hackers in launching phishing attacks, which are highly effective and serve as initial access vectors for more damaging attacks such as identity theft or ransomware.

A range of new tools, such as the WormGPT and FraudGPT text generators; the DeepFaceLab and Faceswap deepfake tools; BlackmailerV3, an AI-driven extortion toolkit for customizing automated blackmail emails; and AI-generated phishing pages from services like Robin Banks and EvilProxy, makes it simple for threat actors to run a quick-and-dirty cybercrime business. 

The report highlights that the growing cybercrime industry is running on “cheap and accessible wins.” With AI evolving, the bar has dropped for cybercriminals to access tactics and intelligence needed for cyberattacks “regardless of an adversary's technical knowledge.”

These tools also allow cybercriminals to build better and more convincing phishing threats and scale a cybercriminal enterprise faster, increasing their success rate. 

Attackers leveraging automated scanning

Attackers are now using automated scanning to find vulnerable systems at “unprecedented levels”: billions of scans per month, or roughly 36,000 scans every second, with the report noting a 16.7% year-over-year rise in active scanning. Because threat actors leverage automation, defenders have less time to patch vulnerable systems once security loopholes affecting organizations are disclosed. 

According to researchers, “Tools like SIPVicious and commercial scanning tools are weaponized to identify soft targets before patches can be applied, signaling a significant 'left-of-boom' shift in adversary strategy.”

Cybercriminals Are Now Focusing More on Stealing Credentials Than Using Ransomware, IBM Warns

A new report from IBM’s X-Force 2025 Threat Intelligence Index shows that cybercriminals are changing their tactics. Instead of mainly using ransomware to lock systems, more hackers are now trying to quietly steal login information. IBM studied over 150 billion security events each day from 130+ countries and found that infostealers, a type of malware sent through emails to steal data, rose by 84% in 2024 compared to 2023.

This change means that instead of damaging systems right away, attackers are sneaking into networks to steal passwords and other sensitive information. Mark Hughes, a cybersecurity leader at IBM, said attackers are finding ways into complex cloud systems without making a mess. He also advised businesses to stop relying on basic protection methods. Instead, companies should improve how they manage passwords, fix weaknesses in multi-factor authentication, and actively search for hidden threats before any damage happens.

Critical industries such as energy, healthcare, and transportation were the main targets in the past year. About 70% of the incidents IBM helped handle involved critical infrastructure. In around 25% of these cases, attackers got in by taking advantage of known flaws in systems that had not been fixed. Many hackers now prefer stealing important data instead of locking it with ransomware. Data theft was the method in 18% of cases, while encryption-based attacks made up only 11%.

The study also found that Asia and North America were attacked the most, together making up nearly 60% of global incidents. Asia alone saw 34% of the attacks, and North America had 24%. Manufacturing businesses remained the top industry targeted for the fourth year in a row because even short outages can seriously hurt their operations.

Emerging threats related to artificial intelligence (AI) were also discussed. No major attacks on AI systems happened in 2024, but experts found some early signs of possible risks. For example, a serious security gap was found in a software framework used to create AI agents. As AI technology spreads, hackers are likely to build new tools to attack these systems, making it very important to secure AI pipelines early.

Another major concern is the slow pace of fixing vulnerabilities in many companies. IBM found that many Red Hat Enterprise Linux users had not updated their systems properly, leaving them open to attacks. Also, ransomware groups like Akira, Lockbit, Clop, and RansomHub have evolved to target both Windows and Linux systems.

Lastly, phishing attacks that deliver infostealers increased by 180% in 2024 compared to the year before. Even though ransomware still accounted for 28% of malware cases, the overall number of ransomware incidents fell. Cybercriminals are clearly moving towards quieter methods that focus on stealing identities rather than locking down systems.


Explaining AI's Impact on Ransomware Attacks and Businesses Security


Ransomware has always been an evolving menace, as criminal outfits experiment with new techniques to terrorise their victims and gain maximum leverage while making extortion demands. Weaponized AI is the most recent addition to the armoury, allowing high-level groups to launch more sophisticated attacks but also opening the door for rookie hackers. The NCSC has cautioned that AI is fuelling the global threat posed by ransomware, and there has been a significant rise in AI-powered phishing attacks. 

Organisations face increasing threats from sophisticated assaults such as polymorphic malware, which can mutate in real time to avoid detection, allowing attackers to strike with more precision and frequency. As AI continues to rewrite the rules of ransomware attacks, businesses that still rely on traditional defences are more vulnerable to the next generation of cyber attacks. 

Ransomware accessible via AI 

Online criminals, like legitimate businesses, are discovering new ways to use AI tools, making ransomware attacks more accessible and scalable. By automating crucial attack procedures, fraudsters can launch faster, more sophisticated operations with less human intervention. 

Established and experienced criminal gangs gain the ability to expand their operations. At the same time, because AI is lowering barriers to entry, individuals with less technical expertise can now use ransomware as a service (RaaS) to carry out advanced attacks that would ordinarily be beyond their capabilities. 

OpenAI, the company behind ChatGPT, has stated that it detected and blocked more than 20 fraudulent operations abusing its generative AI tools, ranging from creating copy for targeted phishing operations to coding and debugging malware. 

FunkSec, a RaaS supplier, is a recent example of how these tools are enhancing criminal groups' capabilities. The gang is reported to have only a few members, its human-written code is relatively simple, and its members show a very low level of English proficiency. Yet since its inception in late 2024, FunkSec has claimed over 80 victims in a single month, thanks to a variety of AI techniques that allow it to punch well above its weight. 

Investigations have revealed evidence of AI-generated code in the gang's ransomware, as well as website and ransom-note text that was clearly created by a Large Language Model (LLM). The group also used Miniapps, a generative AI platform, to build a chatbot to assist with its operations. 

Mitigation tips against AI-driven ransomware 

With AI fuelling ransomware groups, organisations must evolve their defences to stay safe. Traditional security measures are no longer sufficient, and organisations must match fast-moving attackers with their own adaptive, AI-driven defences. 

One critical step is to investigate how to combat AI with AI. Advanced AI-driven detection and response systems may analyse behavioural patterns in real time, identifying anomalies that traditional signature-based techniques may overlook. This is critical for fighting strategies like polymorphism, which have been expressly designed to circumvent standard detection technologies. Continuous network monitoring provides an additional layer of defence, detecting suspicious activity before ransomware can activate and propagate. 
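
To make "analysing behavioural patterns" concrete, here is a deliberately simplified sketch that trains an IsolationForest on ordinary per-host activity and flags outliers. The three features and the contamination rate are assumptions chosen for illustration, not a product configuration.

    # Toy behavioural anomaly detection: files touched/min, bytes written, extensions renamed.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=[20, 5e5, 2], scale=[5, 1e5, 1], size=(500, 3))  # normal hosts
    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    new_activity = np.array([
        [22, 4.8e5, 2],    # consistent with the baseline
        [900, 7e7, 40],    # mass rewrites and renames: classic ransomware behaviour
    ])
    for row, flag in zip(new_activity, detector.predict(new_activity)):  # -1 = anomaly
        print(row, "ALERT" if flag == -1 else "ok")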

Beyond detection, AI-powered solutions are critical for avoiding data exfiltration, as modern ransomware gangs almost always use data theft to squeeze their victims. According to our research, 94% of reported ransomware attacks in 2024 involved exfiltration, highlighting the importance of Anti Data Exfiltration (ADX) solutions as part of a layered security approach. Organisations can prevent extortion efforts by restricting unauthorised data transfers, leaving attackers with no choice but to move on.
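
In the same spirit, the exfiltration-blocking idea can be reduced to a tiny egress-budget check; the threshold and names below are assumptions, and a real ADX product would also weigh destination reputation, content, and behaviour rather than byte counts alone.

    # Toy outbound-transfer policy with an assumed per-process egress budget.
    from collections import defaultdict

    BYTES_PER_HOUR_LIMIT = 50 * 1024 * 1024   # assumed 50 MB/hour per process
    sent_this_hour = defaultdict(int)

    def allow_upload(process: str, destination: str, nbytes: int) -> bool:
        """Block a transfer once a process exceeds its hourly egress budget."""
        if sent_this_hour[process] + nbytes > BYTES_PER_HOUR_LIMIT:
            print(f"BLOCK {process} -> {destination}: egress budget exceeded")
            return False
        sent_this_hour[process] += nbytes
        return True

    print(allow_upload("backup.exe", "storage.example.com", 10 * 1024 * 1024))   # True
    print(allow_upload("rogue.exe", "paste.example.net", 500 * 1024 * 1024))     # False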

Building Smarter AI Through Targeted Training


In recent years, artificial intelligence and machine learning have been in high demand across a broad range of industries. As a consequence, the cost and complexity of building and maintaining these models have increased significantly. AI and machine learning systems are resource-intensive: they require substantial computational resources and large datasets, and their complexity makes them difficult to manage effectively. 

As a result of this trend, professionals such as data engineers, machine learning engineers, and data scientists are increasingly tasked with finding ways to streamline models without compromising performance or accuracy. One of the key aspects of this process is determining which data inputs or features can be reduced or eliminated so that the model operates more efficiently. 

AI model optimization is a systematic effort to improve a model's performance, accuracy, and efficiency so that it achieves better results in real-world applications. The purpose of this process is to improve a model's operational and predictive capabilities through a combination of technical strategies. It is the engineering team's responsibility to improve computational efficiency, reducing processing time, resource consumption, and infrastructure costs, while also enhancing the model's predictive precision and adaptability to changing datasets. 

Important optimization tasks include fine-tuning hyperparameters, selecting the most relevant features, pruning redundant elements, and making advanced algorithmic adjustments to the model. Ultimately, the goal is a model that is not only accurate and responsive but also scalable, cost-effective, and efficient. When these optimization techniques are applied effectively, they ensure the model performs reliably in production environments and remains aligned with the organization's overall objectives. 
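
As a small, hypothetical example of these tasks in practice, the sketch below combines feature selection with a hyperparameter grid search on synthetic data; the estimator, parameter grid, and dataset are illustrative choices, not a prescribed recipe.

    # Feature selection + hyperparameter tuning in one pipeline (synthetic data).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline

    X, y = make_classification(n_samples=1000, n_features=30, n_informative=8, random_state=0)

    pipeline = Pipeline([
        ("select", SelectKBest(score_func=f_classif)),   # drop low-value input features
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    param_grid = {
        "select__k": [5, 10, 20],     # how many features to keep
        "clf__C": [0.1, 1.0, 10.0],   # regularization strength
    }
    search = GridSearchCV(pipeline, param_grid, cv=5).fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))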

ChatGPT's memory feature, which is typically active by default, is designed to retain important details, user preferences, and context so the system can provide more personalized responses over time. Users who want to manage this functionality can open the Settings menu and select Personalization, where they can check whether memory is active and remove specific saved interactions if needed. 

Users are therefore advised to periodically review the data stored by the memory feature to ensure its accuracy. In some cases, incorrect information may be retained, including inaccurate personal details or assumptions made during a previous conversation. For example, the system might incorrectly log information about a user’s family, or other aspects of their profile, based on the context in which it was used. 

In addition, the memory feature may inadvertently store sensitive data shared for practical purposes, such as banking details, account information, or health-related queries, especially if users are trying to solve personal problems or experimenting with the model. While the memory function improves response quality and continuity, it also requires careful oversight from the user. Users are strongly encouraged to routinely audit their saved data points and delete anything they find inaccurate or overly sensitive. This practice helps maintain data accuracy and keeps interactions secure. 

This is similar to periodically clearing your browser cache to maintain privacy and performance. "Training" ChatGPT, in the sense of customized usage, means providing specific contextual information to the AI so that its responses are more relevant and accurate for the individual. To guide the AI to behave and speak in a way that is consistent with their needs, users can upload documents such as PDFs, company policies, or customer service transcripts. 

This type of customization is most useful for people and organizations tailoring interactions around business-related content and customer engagement workflows. For personal use, however, building a custom GPT is usually unnecessary: users can share relevant context directly in their prompts or attach files to their messages and achieve effective personalization that way. 

For example, a user crafting a job application can upload their resume along with the job description, allowing the AI to create a cover letter that accurately represents their qualifications and aligns with the position's requirements. This type of user-level customization is quite different from the traditional model training process, which processes vast quantities of data and is performed by OpenAI's engineering teams. 

Additionally, users can extend ChatGPT's memory-driven personalization by explicitly telling it what details they wish to be remembered, such as a recent move to a new city or specific lifestyle preferences like dietary choices. Once stored, this information allows the AI to keep conversations consistent in the future. While these interactions enhance usability, they also call for thoughtful data sharing to protect privacy and accuracy, especially as ChatGPT's stored memory grows over time. 

Optimizing an AI model is essential for both performance and resource efficiency. It involves refining a variety of model elements to maximize prediction accuracy while minimizing computational demand. Common techniques include pruning unused parameters to streamline networks, applying quantization to reduce numerical precision and speed up processing, and using knowledge distillation, which transfers insights from complex models into simpler, faster ones. 
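
For instance, two of those techniques, magnitude pruning and post-training dynamic quantization, look roughly like the PyTorch sketch below; the toy model and the 30% sparsity level are assumptions, and a distillation loss is indicated only in a comment.

    # Pruning + dynamic quantization on a toy model (illustrative settings).
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    # Prune the smallest-magnitude 30% of weights in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")   # make the sparsity permanent

    # Post-training dynamic quantization: int8 weights for Linear layers.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    print(quantized(torch.randn(1, 128)).shape)   # same interface, smaller and faster on CPU

    # Knowledge distillation, in one line of loss (full training loop omitted):
    # loss = KLDiv(log_softmax(student_logits / T), softmax(teacher_logits / T)) * T**2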

Significant efficiency gains can also come from optimizing data pipelines, deploying high-performance algorithms, using hardware acceleration such as GPUs and TPUs, and applying compression techniques such as weight sharing and low-rank approximation. Balancing batch sizes likewise ensures optimal resource use and training stability. 

On the accuracy side, teams can curate clean, balanced datasets, fine-tune hyperparameters using advanced search methods, increase model complexity with caution, and apply techniques such as cross-validation and feature engineering. Maintaining long-term performance requires not only leveraging pre-trained models but also regular retraining to combat model drift. Applied strategically, these techniques enhance the scalability, cost-effectiveness, and reliability of AI systems across diverse applications. 
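
A brief sketch of the accuracy side, assuming scikit-learn and synthetic data: cross-validation estimates generalization, and a naive accuracy-drop check on newer data stands in for proper drift monitoring.

    # Cross-validation plus a naive drift check (synthetic data; thresholds assumed).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0)

    cv_scores = cross_val_score(model, X, y, cv=5)        # estimate generalization
    print("cv accuracy:", round(cv_scores.mean(), 3))

    model.fit(X, y)
    # Simulate newer production data whose distribution has shifted.
    X_new, y_new = make_classification(n_samples=500, n_features=20, shift=0.5, random_state=1)
    live_acc = model.score(X_new, y_new)
    if live_acc < cv_scores.mean() - 0.05:                # assumed drift tolerance
        print(f"accuracy dropped to {live_acc:.3f}; schedule retraining")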

Using tailored optimization solutions from Oyelabs, organizations can unlock the full potential of their AI investments. As artificial intelligence continues to evolve rapidly, it becomes increasingly important to train and optimize models strategically through data-driven optimization. From feature selection and algorithm optimization to efficient data handling, organizations can apply advanced techniques to improve performance while controlling resource expenditures. 

Professionals and teams that prioritize these improvements will put themselves in a much better position to create AI systems that are not only faster and smarter but also more adaptable to real-world demands. By partnering with experts and focusing on how AI delivers value-driven outcomes, businesses can deepen their understanding of AI and improve their scalability and long-term sustainability.