
Google to Launch Gemini AI for Children Under 13

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 with parent-managed Google accounts, as tech companies vie to attract young users with AI products.

According to an email sent to the parent of an 8-year-old, Gemini apps will soon be available to the child, who will be able to use the chatbot to ask questions, get homework help, and create stories.

The chatbot will be available to children whose guardians use Family Link, a Google feature that lets families set up Gmail and opt in to services like YouTube for their children. To register a child's account, the parent provides the company with the child's personal information, such as name and date of birth.

According to Google spokesperson Karl Ryan, Gemini has concrete safeguards for younger users that restrict the chatbot from generating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company cannot use that data to train its AI models.

Gemini for children could drive chatbot use among vulnerable populations as companies, colleges, schools, and others grapple with the effects of popular generative AI technology. These systems are trained on massive datasets to produce human-like text and realistic images and videos. Google and other AI chatbot developers are battling fierce competition for young users' attention.

Recently, President Donald Trump urged schools to embrace the tools for teaching and learning. Millions of teens already use chatbots as study aids, virtual companions, and writing coaches. Experts have warned that chatbots could pose serious risks to child safety.

The bots are also known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children, who may struggle to understand that chatbots are not human.

According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot. 

WhatsApp Reveals "Private Processing" Feature for Cloud Based AI Features

WhatsApp claims even it cannot see users' private data

WhatsApp has introduced 'Private Processing,' a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. Meta claims that even it cannot see messages while they are being processed. The system relies on encrypted cloud infrastructure and hardware-based isolation, so the data being processed is visible to no one, not even Meta.

About private processing

For those who choose to use Private Processing, the system performs an anonymous verification via the user's WhatsApp client to confirm the user's validity.

Meta claims the system keeps WhatsApp's end-to-end encryption intact while bringing AI features into chats. However, the feature currently applies only to select use cases and excludes Meta's broader AI deployments, including those used in India's public service systems.

Private Processing employs Trusted Execution Environments (TEEs), secure virtual machines running on cloud infrastructure that keep AI requests hidden.

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages after processing
  • Publishes logs and binary images for external verification and audits
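To make the first bullet concrete, here is a minimal sketch of the underlying idea, assuming nothing about Meta's actual wire protocol: the device encrypts its request against a key that only the TEE holds, so anyone in between, including the service operator, sees only ciphertext. All names here (and the `private-processing-sketch` label) are purely illustrative.

```python
# Minimal sketch (not Meta's protocol): encrypt a request so that only a TEE
# holding the matching private key can read it.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# In the real system the TEE's public key would be attested; here we generate it.
tee_key = X25519PrivateKey.generate()

# Device side: ephemeral Diffie-Hellman against the TEE's public key.
device_key = X25519PrivateKey.generate()
secret = device_key.exchange(tee_key.public_key())
aead_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"private-processing-sketch").derive(secret)

nonce = os.urandom(12)
request = b"Summarize this chat thread"
ciphertext = ChaCha20Poly1305(aead_key).encrypt(nonce, request, None)

# TEE side: derive the same key from the device's public half and decrypt.
secret_tee = tee_key.exchange(device_key.public_key())
aead_key_tee = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"private-processing-sketch").derive(secret_tee)
assert ChaCha20Poly1305(aead_key_tee).decrypt(nonce, ciphertext, None) == request
```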

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp joins companies such as Apple that introduced confidential AI computing models over the past year. "To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity," Meta said.

The approach is similar to Apple's Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is applying it only to select features. Apple, by contrast, has announced plans to adopt its model across all of its AI tools; WhatsApp has made no such commitment yet.

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
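As a rough illustration of that split of knowledge (hypothetical, not WhatsApp's actual credential scheme): the relay learns who is connecting but sees only an opaque payload, while the gateway can check that a credential is valid without learning which account presented it.

```python
# Conceptual sketch of an OHTTP-style split: the relay sees the client's IP
# but not the content; the gateway validates an anonymous token but never
# sees the IP. Hypothetical, not WhatsApp's wire format.
import hashlib, hmac, os, secrets

ISSUER_KEY = os.urandom(32)  # held by the credential issuer

def issue_credential():
    """Issue a signed random token carrying no account identity."""
    token = secrets.token_bytes(16)
    return token, hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()

def relay(client_ip, payload):
    """Forwards the payload and drops the client's IP; cannot read the content."""
    return payload

def gateway(payload):
    """Checks the token is genuine without learning who sent it."""
    expected = hmac.new(ISSUER_KEY, payload["token"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, payload["tag"])

token, tag = issue_credential()
message = {"token": token, "tag": tag, "ciphertext": b"..."}
assert gateway(relay("203.0.113.7", message))
```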

Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds


A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, Investigatory Powers Commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

AI Now Writes Up to 30% of Microsoft’s Code, Says CEO Satya Nadella


Artificial intelligence is rapidly reshaping software development at major tech companies, with Microsoft CEO Satya Nadella revealing that between 20% and 30% of code in the company’s repositories is currently generated by AI tools. 

Speaking during a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference, Nadella shed light on how AI is becoming a core contributor to Microsoft’s development workflows. He noted that Microsoft is increasingly relying on AI not just for coding but also for quality assurance. 

“The agents we have for reviewing code; that usage has increased,” Nadella said, adding that the performance of AI-generated code differs depending on the programming language. While Python showed strong results, C++ remained a challenge. “C Sharp is pretty good but C++ is not that great. Python is fantastic,” he noted. 

When asked about the role of AI in Meta’s software development, Zuckerberg did not provide a specific figure but shared that the company is prioritizing AI-driven engineering to support the development of its Llama models. 

“Our bet is that probably half the development is done by AI as opposed to people and that will just kind of increase from there,” Zuckerberg said. 

Microsoft’s Chief Technology Officer Kevin Scott has previously projected that AI will be responsible for generating 95% of all code within the next five years. Speaking on the 20VC podcast, Scott emphasized that human developers will still play a vital role. 

“Very little is going to be — line by line — human-written code,” he said, but added that AI will “raise everyone’s level,” making it easier for non-experts to create functional software. The comments from two of tech’s biggest leaders point to a future where AI not only augments but significantly drives software creation, making development faster, more accessible, and increasingly automated.

Do Not Charge Your Phone at Public Stations, Experts Warn

For a long time, smartphones have had a built-in safeguard against unauthorized access over USB. On Android and iOS, a pop-up asks us to confirm access before a USB data connection is established and our data can be transferred.

But this defense is not enough to protect against "juice-jacking", a hacking technique that manipulates charging stations to install malicious code, steal data, or gain access to the device while it is plugged in. Cybersecurity researchers have discovered a serious loophole in this protection that hackers can exploit with ease.

Hackers using new technique to hack smartphones via USB

According to experts, hackers can now use a new method called "choice jacking" to get access to a smartphone approved without the user ever realizing it.

First, the hackers rig a charging station so that it presents itself as a USB keyboard when connected. Then, through USB Power Delivery, it performs a "USB PD Data Role Swap" to establish a Bluetooth connection, triggers the file-transfer consent pop-up, and approves the permission itself while acting as a Bluetooth keyboard.

The hackers use the charging station to evade the device's protection mechanism, which is meant to defend users against attacks via USB peripherals. This becomes a serious problem if an attacker gains access to all the files and personal data stored on a smartphone and uses them to compromise accounts.

Experts at Graz University of Technology tested the technique on devices from many manufacturers, including Samsung, the second-largest smartphone vendor after Apple. All tested smartphones allowed the researchers to transfer data while the screen was unlocked.

No solution to this problem

Despite smartphone manufacturers being aware of the problem, there are still not enough safeguards against juice-jacking. Only Google and Apple have implemented a fix, which requires users to enter their PIN or password before a connected device is authorized to start a data transfer. Other manufacturers have yet to come up with effective solutions to address the issue.

The danger is greater if your smartphone has USB debugging enabled: USB debugging lets attackers reach the device via the Android Debug Bridge (ADB), deploy their own apps, run files, and generally operate with elevated access.
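As a practical check, a sketch like the following (assuming the Android SDK's adb tool is installed and the phone is connected over a trusted cable; `adb_enabled` is the standard Android global setting for USB debugging) can confirm that debugging is off:

```python
# Hedged sketch: query and disable Android USB debugging via adb.
import subprocess

def adb(*args):
    """Run an adb command and return its trimmed output."""
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

# "1" means USB debugging is on, which makes juice-jacking-style attacks far
# more dangerous because ADB grants shell-level access to the device.
if adb("shell", "settings", "get", "global", "adb_enabled") == "1":
    adb("shell", "settings", "put", "global", "adb_enabled", "0")
    print("USB debugging was enabled; it has now been turned off.")
else:
    print("USB debugging is off.")
```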

How to be safe?

The easiest way for users to protect themselves from juice-jacking attacks is to never use a public USB charging station. Charging stations in busy areas such as airports and malls are the most dangerous and should always be avoided.

Users are advised to carry their own power banks when traveling and to always keep their smartphones updated.

Microsoft Launches Recall AI for Windows 11 Copilot+ PCs with Enhanced Privacy Measures


After months of delays stemming from privacy and security concerns, Microsoft has officially rolled out its Recall AI feature for users of Windows 11 Copilot+ PCs. The feature, which has now exited its beta phase, is included in the latest Windows update. Recall AI enables users to search their on-screen activity by automatically taking screenshots and storing them—along with any extracted text—in a locally encrypted and searchable database. This makes it easier for users to find and revisit previous interactions, such as documents, applications, or web pages, using natural language search. 
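Conceptually, the search side resembles a local full-text index over text extracted from screenshots. A minimal sketch of that idea (not Microsoft's implementation, and omitting Recall's encryption layer; the file name and sample text are invented):

```python
# Toy Recall-style index: store OCR'd screenshot text in a local SQLite
# full-text index and query it with free-form terms.
import sqlite3, time

db = sqlite3.connect("activity.db")  # hypothetical local store
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snapshots "
           "USING fts5(captured_at UNINDEXED, extracted_text)")

def record_snapshot(text):
    """In the real feature this text would come from OCR of a screenshot."""
    db.execute("INSERT INTO snapshots VALUES (?, ?)", (time.time(), text))

def search(query):
    """Return matching snapshots, best match first."""
    return db.execute("SELECT captured_at, extracted_text FROM snapshots "
                      "WHERE snapshots MATCH ? ORDER BY rank",
                      (query,)).fetchall()

record_snapshot("Quarterly budget spreadsheet open in Excel, marketing tab")
print(search("budget spreadsheet"))
```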

Originally introduced in May 2024, Recall AI faced widespread criticism due to concerns around user privacy and the potential for misuse. Microsoft delayed its public launch several times, including a planned release in October 2024, to address these issues and gather feedback from Windows Insider testers. 

In its revised version, Microsoft has made Recall AI an opt-in tool with built-in privacy protections. All data remains on the user’s device, with no transmission to Microsoft servers or third parties. Features such as Windows Hello authentication, full local encryption, and user control over data storage have been added to reinforce security. Microsoft assures users they can completely remove the feature at any time, although temporary system files may persist briefly before being permanently deleted. 

For enterprise users with an active Microsoft 365 E3 subscription, the company offers advanced administrative controls. These allow IT departments to set access permissions and manage security policies related to the use of Recall AI in workplace environments. Alongside Recall AI, Microsoft has also launched two additional features tailored to Copilot+ PCs. 

The improved Windows search function now interprets user queries more contextually and processes them using the device’s neural processing unit for faster and smarter results. Meanwhile, the Click to Do feature provides context-sensitive shortcuts, making tasks like copying or summarising text and images more efficient. In separate developments, Microsoft continues to advance its position in quantum computing.

Earlier this year, the company unveiled Majorana 1, a quantum chip based on a novel Topological Core architecture. According to Microsoft, this breakthrough has the potential to significantly accelerate solutions to industrial-scale problems using quantum technology.

Microsoft Alerts Users About Password-spraying Attack


Microsoft has warned users about a new password-spraying attack by the hacking group Storm-1977 that targets cloud users. The Microsoft Threat Intelligence team issued the warning after discovering that threat actors are abusing unsecured workload identities to access restricted resources.

According to Microsoft, “Container technology has become essential for modern application development and deployment. It's a critical component for over 90% of cloud-native organizations, facilitating swift, reliable, and flexible processes that drive digital transformation.” 

Hackers exploit the rise of containers-as-a-service

Research shows that 51% of such workload identities have been inactive for the past year, which is why attackers are exploiting this attack surface. The report also notes that the risk grows as the "adoption of containers-as-a-service among organizations rises." According to Microsoft, it continues to watch for unique security threats that affect "containerized environments."

The password-spraying attack used a command-line interface tool, "AzureChecker," to download AES-encrypted data that, once decrypted, revealed the list of password-spray targets. To make matters worse, the "threat actor then used the information from both files and posted the credentials to the target tenants for validation."

The attack allowed the Storm-1977 hackers to leverage a guest account to create a resource group within a compromised subscription and spin up more than 200 containers, which they used for crypto mining.
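For readers unfamiliar with the technique: password spraying tries one or a few common passwords across many accounts, staying under per-account lockout thresholds. A toy detector over hypothetical audit-log entries shows the telltale pattern of one source failing against many distinct accounts:

```python
# Toy password-spray detector over made-up audit-log tuples.
from collections import defaultdict

failed_logins = [  # (source_ip, username) pairs from hypothetical logs
    ("198.51.100.9", "alice"), ("198.51.100.9", "bob"),
    ("198.51.100.9", "carol"), ("198.51.100.9", "dave"),
    ("192.0.2.4", "alice"),  # a lone failure, probably a typo
]

accounts_per_ip = defaultdict(set)
for ip, user in failed_logins:
    accounts_per_ip[ip].add(user)

THRESHOLD = 3  # tune to your environment
for ip, users in accounts_per_ip.items():
    if len(users) > THRESHOLD:
        print(f"possible password spray from {ip}: {len(users)} accounts hit")
```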

Mitigating password-spraying attacks

One answer to password spraying is eliminating passwords altogether by moving to passkeys, as many users already are.

Microsoft has suggested these steps to mitigate the issue:

  • Use strong authentication when exposing sensitive interfaces to the internet.
  • Use strong verification methods for the Kubernetes API to stop hackers from accessing the cluster even when valid credentials such as a kubeconfig are obtained.
  • Don't expose the read-only Kubelet endpoint on port 10255, which requires no verification.
  • Configure Kubernetes role-based access controls for every user and service account to retain only the permissions that are actually required (a sketch follows this list).
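For the last item on that list, a sketch using the official kubernetes Python client (assuming it is installed and a kubeconfig is available; the role name and namespace are placeholders) shows what least privilege looks like in practice: a Role that can only read pods in a single namespace, nothing more.

```python
# Least-privilege RBAC sketch: a Role limited to reading pods in one namespace.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[client.V1PolicyRule(api_groups=[""],  # "" = core API group
                               resources=["pods"],
                               verbs=["get", "list", "watch"])],
)
client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace="default", body=role)
```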

According to Microsoft, “Recent updates to Microsoft Defender for Cloud enhance its container security capabilities from development to runtime. Defender for Cloud now offers enhanced discovery, providing agentless visibility into Kubernetes environments, tracking containers, pods, and applications.” These updates strengthen security through continuous, granular scanning.

Now You Can Hire AI Tools Like Freelancers — Thanks to This Indian Startup

A tech startup based in Ahmedabad is changing how businesses use artificial intelligence. The company has launched a platform that allows users to hire AI tools the same way they hire freelancers: on demand and for specific tasks.

Over the past few years, companies everywhere have turned to AI to speed up their work, reduce costs, and make smarter decisions. But finding the right AI tool has become a tough task. With hundreds of platforms available online, most users, especially those without a technical background, don't know where to start. Many tools are expensive, difficult to use, or don't work as expected.

That’s where ActionAgents, a platform by ActionLabs.ai, comes in. The idea behind the platform began when the team noticed that many of their business clients kept asking which AI tool to use for particular needs. There was no clear or reliable place to compare different tools and test them first.

At first, they created a directory that listed a wide range of AI tools from different sectors. But it didn’t solve the full problem. Users still had to leave the site, sign up for external tools, and often pay for something that didn’t meet their expectations. This made it harder for small businesses and non-technical users to benefit from AI.

To solve this, the team launched ActionAgents in January. It is a single platform that brings various AI tools together and lets users access them directly. There’s no need to subscribe or download anything. Users can try out different AI agents and only pay when they use a service.

The platform currently offers over 50 AI-powered mini tools. These include tools for writing resumes and cover letters, checking job applications against hiring systems, generating business names, planning trips, finding gifts, building websites, and even analyzing WhatsApp chats.

In just two months, more than 3,000 people have signed up. Every day, about 80–100 new users join, and the AI agents complete over 200 tasks. What's more impressive is that the startup has done all this without spending money on advertising. People from countries like India, the US, and Canada, as well as across Europe and the Middle East, are using the platform.

The startup launched with an investment of ₹15–20 lakh and is already seeing steady growth in users and revenue. ActionAgents now plans to reach 10,000 users in the next few months and, over the next two years, aims to grow its user base to around 1 million.

The team also wants to open the platform to developers, allowing them to build their own AI tools and offer them on ActionAgents. This move could help more people build, sell, and earn from their own AI creations.


From a Small Home to a Big AI Dream

The person who started ActionAgents, Jay, didn’t come from a rich background. He grew up in Ahmedabad, where his family worked very hard to earn a living. His father drove a rickshaw and often worked extra hours to support them. His mother stitched clothes for a living and also taught other women how to sew, so they could earn money too.

Even though they didn’t have much money, Jay’s parents always believed that education was important. They wanted him to study in an English-medium school, even when relatives made fun of them for spending money on it. They hoped a good education would give him better chances in life.

That decision made a big difference. Today, Jay is building a powerful AI platform from scratch, without taking any money from investors. He started small, but now he’s working to make AI tools easy and affordable for everyone, whether they are tech-savvy or not.

He is not doing it alone. A young and talented team is helping him bring this idea to life. People like Jash Jasani, Dev Patel, Deepali, and many others are part of the ActionAgents team. Together, they are working on building smart solutions that can help businesses and individuals with simple tasks using AI.

Their goal is to change how people use technology in daily work by making it easier, quicker, and more helpful. From a small beginning, they are now working towards a big vision: to shape the future of how people work with the help of AI.