
Delta Air Lines Is Using AI to Set Ticket Prices

 

With major ramifications for passengers, airlines are increasingly using artificial intelligence to set ticket prices. Simple actions like allowing browser cookies, accepting website agreements, or enrolling in loyalty programs can now influence a flight's price. The move to AI-driven pricing raises significant concerns about equity, privacy, and the possibility of increased travel costs.

Recently, Delta Air Lines revealed that the Israeli startup Fetcherr's AI technology is used to determine about 3% of its domestic ticket rates. To generate customised offers, this system analyses a number of variables, such as past purchasing patterns, user lifetime value, and the current context of each booking query. The airline plans to raise AI-based pricing to 20% of tickets by the end of 2025, according to Delta President Glen Hauenstein, who also emphasised the favourable revenue impact. 

Regulatory issues

US lawmakers have questioned the use of AI pricing models, fearing that they may result in higher fares and unfair disadvantages for some customers. The public's response has been mixed; some passengers are concerned that customised pricing schemes could make air travel less transparent and affordable.

Other airlines are following suit, investing in AI expertise and building machine learning solutions to adopt dynamic, data-driven pricing strategies. Although this trend invites increased regulatory scrutiny, it also signals a larger transition within the industry. In an effort to strike a balance between innovation and fairness, authorities are looking more closely at how AI technologies affect consumer rights and market competition.

In Canada, airlines such as Porter recognise the use of dynamic pricing and the integration of AI in some operational areas, but they do not yet use AI for personalised ticket pricing. Canadian consumers benefit from enhanced privacy safeguards under the Personal Information Protection and Electronic Documents Act (PIPEDA), which requires firms to get "meaningful consent" before collecting, processing, or sharing personal data. 

Nevertheless, experts caution that PIPEDA is out of date and does not completely handle the complications posed by AI-driven pricing. Terry Cutler, a Canadian information security consultant, notes that, while certain safeguards exist, significant ambiguities persist, particularly when data is used in unexpected ways, such as changing prices based on browsing histories or device types. 

Implications for passengers 

As airlines accelerate the introduction of AI-powered pricing, passengers should be cautious about how their personal information is used. With regulatory frameworks trying to keep up with rapid technology innovations, customers must navigate an ever-evolving sector that frequently lacks transparency. Understanding these dynamics is critical for maintaining privacy and making informed judgements in the age of AI-powered air travel pricing.

Alaska Airlines Grounds All Flights Amid System-Wide IT Outage, Passengers Face Major Disruptions

 


Alaska Airlines was forced to implement a full nationwide ground stop for its mainline and Horizon Air flights on Sunday evening due to a significant IT system outage. The disruption began around 8 p.m. Pacific Time and affected the entire network of the Seattle-based airline, as reflected on the Federal Aviation Administration’s (FAA) status dashboard.

The technical failure led to a temporary suspension of flight operations, causing widespread delays and cancellations across multiple airports in the U.S. Seattle-Tacoma International Airport, Alaska Airlines’ central hub, was among the worst affected. As reported by The Economic Times, hundreds of flights were either grounded or delayed, leaving travelers stranded and uncertain about their schedules.

In a statement to CBS News, the airline confirmed it had "experienced an IT outage that's impacting our operations" and had issued "a temporary, system-wide ground stop for Alaska and Horizon Air flights until the issue is resolved". The airline also warned of lingering operational impacts, noting passengers should expect ongoing disruptions into the night.

While the root cause of the outage has not been revealed, Alaska Airlines assured that its technical teams were actively working to restore normal service. A message posted on the carrier’s website stated: "We are experiencing issues with our IT systems. We apologize for the inconvenience and are working to resolve the issues."

As reported by KIRO 7, some services began resuming gradually by late Sunday. However, neither the airline nor the FAA provided a clear timeline for when full service would be reinstated.

Passengers took to social media to voice frustration over long wait times, non-functional customer support apps, and confusion at airport gates. Meanwhile, Portland International Airport reported minimal delays, with only a few Alaska Airlines flights affected by 9:15 p.m., according to KATU.

This outage is among the most significant operational disruptions for Alaska Airlines in recent years. The last major glitch of this scale occurred in 2022, resulting in widespread delays due to a similar system malfunction.

As of now, Alaska Airlines has advised passengers to regularly check their flight status before heading to the airport and has not confirmed any details regarding rebooking assistance or compensation. The airline continues to be a major player in both domestic and international air travel, especially across the U.S. West Coast.

Stop! Don’t Let That AI App Spy on Your Inbox, Photos, and Calls

 



Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.

Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.

This situation is similar to how people once questioned why simple mobile apps, such as flashlight or calculator apps, needed access to personal data like location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.

One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.

Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.

Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is that AI can make mistakes, and when that happens, real humans at these companies might look through your private information to figure out what went wrong.

So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.

Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.

Why Policy-Driven Cryptography Matters in the AI Era

 



In this modern-day digital world, companies are under constant pressure to keep their networks secure. Traditionally, encryption systems were deeply built into applications and devices, making them hard to change or update. When a flaw was found, either in the encryption method itself or because hackers became smarter, fixing it took time, effort, and risk. Most companies chose to live with the risk because they didn’t have an easy way to fix the problem or even fully understand where it existed.

Now, with data moving across various platforms such as cloud servers, edge devices, and personal gadgets, it's no longer practical to depend on rigid security setups. Businesses need flexible systems that can quickly respond to new threats, government rules, and technological changes.

According to the IBM X-Force 2025 Threat Intelligence Index, nearly one-third (30%) of all intrusions in 2024 began with valid account credential abuse, making identity theft a top pathway for attackers.

This is where policy-driven cryptography comes in.


What Is Policy-Driven Crypto Agility?

It means building systems where encryption tools and rules can be easily updated or swapped out based on pre-defined policies, rather than making changes manually in every application or device. Think of it like setting rules in a central dashboard: when updates are needed, the changes apply across the network with a few clicks.

This method helps businesses react quickly to new security threats without affecting ongoing services. It also supports easier compliance with laws like GDPR, HIPAA, or PCI DSS, as rules can be built directly into the system and leave behind an audit trail for review.
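The central-dashboard idea above can be sketched in a few lines of Python: a single policy table maps data classifications to cipher backends, so responding to a new threat is one central change rather than edits to every application. The backends here are toy HMAC-based keystreams for illustration only, not real, vetted encryption; names like `POLICY` and `BACKENDS` are invented for this sketch.

```python
import hmac

# Toy cipher backends keyed by algorithm name. A real deployment would plug in
# vetted implementations (e.g. AES-GCM, or a post-quantum scheme later on);
# these HMAC-derived keystreams exist only to make the pattern runnable.
def _keystream_cipher(hash_name):
    def crypt(key: bytes, data: bytes) -> bytes:
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            # Derive keystream blocks from the key and a block counter.
            out.extend(hmac.new(key, counter.to_bytes(8, "big"), hash_name).digest())
            counter += 1
        return bytes(b ^ k for b, k in zip(data, out))
    return crypt

BACKENDS = {
    "stream-sha256": _keystream_cipher("sha256"),
    "stream-sha512": _keystream_cipher("sha512"),
}

# Central policy: data classification -> algorithm. Updating this one mapping
# changes behaviour everywhere, with no per-application code changes.
POLICY = {"pii": "stream-sha256", "public": "stream-sha256"}

def encrypt(classification: str, key: bytes, data: bytes) -> bytes:
    algo = POLICY[classification]      # policy lookup, not a hard-coded cipher
    return BACKENDS[algo](key, data)

decrypt = encrypt  # symmetric keystream construction: same operation both ways

# Reacting to a new threat: rotate the "pii" policy to a different backend
# from the central table, without touching any calling application.
POLICY["pii"] = "stream-sha512"
```

The point of the indirection is that every application calls `encrypt("pii", ...)` and never names an algorithm, which is what makes the swap auditable and instantaneous.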


Why Is This Important Today?

Artificial intelligence is making cyber threats more powerful. AI tools can now scan massive amounts of encrypted data, detect patterns, and even speed up the process of cracking codes. At the same time, quantum computing, a new kind of computing still in development, may soon be able to break the encryption methods we rely on today.

If organizations start preparing now by using policy-based encryption systems, they’ll be better positioned to add future-proof encryption methods like post-quantum cryptography without having to rebuild everything from scratch.


How Can Organizations Start?

To make this work, businesses need a strong key management system: one that handles the creation, rotation, and deactivation of encryption keys. On top of that, there must be a smart control layer that reads the rules (policies) and makes changes across the network automatically.

Policies should reflect real needs, such as what kind of data is being protected, where it’s going, and what device is using it. Teams across IT, security, and compliance must work together to keep these rules updated. Developers and staff should also be trained to understand how the system works.
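As a rough illustration of that key lifecycle, the sketch below models creation, rotation, and deactivation of keys, with an audit trail that compliance reviews could inspect. The `KeyManager` API is hypothetical, invented for this sketch, and is not a production key management system.

```python
import secrets
import time

class KeyManager:
    """Minimal sketch of policy-driven key lifecycle management (hypothetical
    API, not a production KMS): creation, rotation, and deactivation, with an
    audit trail for compliance review."""

    def __init__(self, rotation_period_s: float):
        self.rotation_period_s = rotation_period_s   # the policy knob
        self.keys = {}          # key_id -> {"material", "state", "created"}
        self.active_id = None
        self.audit_log = []     # (timestamp, event, key_id) tuples

    def _record(self, event: str, key_id: str) -> None:
        self.audit_log.append((time.time(), event, key_id))

    def create_key(self) -> str:
        key_id = f"key-{len(self.keys) + 1}"
        self.keys[key_id] = {"material": secrets.token_bytes(32),
                             "state": "active",
                             "created": time.time()}
        self.active_id = key_id
        self._record("created", key_id)
        return key_id

    def rotate_if_due(self) -> str:
        # The policy layer calls this on a schedule. Old keys are marked
        # "deactivated" rather than deleted so existing ciphertexts can
        # still be decrypted during re-encryption.
        current = self.keys.get(self.active_id)
        if current and time.time() - current["created"] >= self.rotation_period_s:
            current["state"] = "deactivated"
            self._record("deactivated", self.active_id)
            return self.create_key()
        return self.active_id
```

The design choice worth noting is that rotation deactivates rather than destroys old keys, and every state change lands in the audit log, which is what gives compliance teams the review trail the policy promises.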

As more companies shift toward cloud-based networks and edge computing, policy-driven cryptography offers a smarter, faster, and safer way to manage security. It reduces the chance of human error, keeps up with fast-moving threats, and ensures compliance with strict data regulations.

In a time when hackers use AI and quantum computing is fast approaching, flexible and policy-based encryption may be the key to keeping tomorrow’s networks safe.

Britons Risk Privacy by Sharing Sensitive Data with AI Chatbots Despite Security Concerns

 

Nearly one in three individuals in the UK admits to sharing confidential personal details with AI chatbots, such as OpenAI’s ChatGPT, according to new research by cybersecurity firm NymVPN. The study reveals that 30% of Britons have disclosed sensitive data—including banking information and health records—to AI tools, potentially endangering their own privacy and that of others.

Despite 48% of respondents expressing concerns over the safety of AI chatbots, many continue to reveal private details. This habit extends to professional settings, where employees are reportedly sharing internal company and customer information with these platforms.

The findings come amid a wave of high-profile cyberattacks, including the recent breach at Marks & Spencer, which underscores how easily confidential data can be compromised. NymVPN reports that 26% of survey participants have entered financial details related to salaries, mortgages, and investments, while 18% have exposed credit card or bank account numbers. Additionally, 24% acknowledged sharing customer data—such as names and email addresses—and 16% uploaded company financial records and contracts.

“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.

Organizations such as M&S, Co-op, and Adidas have already made headlines for data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” Halpin added.

With nearly a quarter of people admitting to sharing customer data with AI tools, experts emphasize the urgent need for businesses to establish strict policies governing AI usage at work.

“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” Halpin warned.

Completely avoiding AI chatbots might be the safest option, but it’s not always realistic. Users are advised to refrain from entering sensitive information, adjust privacy settings by disabling chat history, or opt out of model training.

Using a VPN can provide an additional layer of online privacy by encrypting internet traffic and masking IP addresses when accessing AI chatbots like ChatGPT. However, even with a VPN, risks remain if individuals continue to input confidential data.

How Tech Democratization Is Helping SMBs Tackle 2025’s Toughest Challenges

 

Small and medium-sized businesses (SMBs) are entering 2025 grappling with familiar hurdles: tight budgets, economic uncertainty, talent shortages, and limited cybersecurity resources. A survey of 300 decision-makers highlights how these challenges are pushing SMBs to seek smarter, more affordable tech solutions.

Technology itself ranks high on the list of SMB pain points. A 2023 Mastercard report (via Digital Commerce 360) showed that two-thirds of small-business owners saw seamless digital experiences as critical—but 25% were overwhelmed by the cost and complexity. The World Economic Forum's 2025 report echoed this, noting that SMBs are often “left behind” when it comes to transformative tech.

That’s changing fast. As enterprise-grade tools become more accessible, SMBs now have affordable, powerful options to bridge the tech gap and compete effectively.

1. Stronger, Smarter Networks
Downtime is expensive—up to $427/minute, says Pingdom. SMBs now have access to fast, reliable fiber internet with backup connections that kick in automatically. These networks support AI tools, cloud apps, IoT, and more—while offering secure, segmented Wi-Fi for teams, guests, and devices.

Case in point: Albemarle, North Carolina, deployed fiber internet with a cloud-based backup, ensuring critical systems stay online 24/7.

2. Cybersecurity That Fits the SMB Budget
Cyberattacks hit 81% of small businesses in the past year (Identity Theft Resource Center, 2024). Yet under half feel ready to respond, and many hesitate to invest due to cost. The good news: built-in firewalls, multifactor authentication, and scalable security layers are now more affordable than ever.

As Checker.ai founder Anup Kayastha told StartupNation, the company started with MFA and scaled security as they grew.

3. Big Brand Experiences, Small Biz Budgets
SMBs now have the digital tools to deliver seamless, omnichannel customer experiences—just like larger players. High-performance networks and cloud-based apps enable rich e-commerce journeys and AI-driven support that build brand presence and loyalty.

4. Predictable Pricing, Maximum Value
Tech no longer requires deep pockets. Today’s solutions bundle high-speed internet, cybersecurity, compliance, and productivity tools—often with self-service options to reduce IT overhead.

5. Built-In Tech Support
Forget costly consultants. Many SMB-friendly providers now offer local, on-site support as part of their packages—helping small businesses install, manage, and maintain systems with ease.

Here's How Everyday Tech Is Being Weaponized to Deploy Trojan

 

The technology that facilitates your daily life, from the smartphone in your hand to the car in your garage, may simultaneously be detrimental to you. Once the stuff of spy thrillers, consumer electronics can today be used as tools of control, tracking, or even warfare if they are manufactured in adversarial countries or linked to opaque systems. 

Mandatory usage and dependence on technology in all facets of our lives has led to risks and vulnerabilities that are no longer hypothetical. In addition to being found in your appliances, phone, internet, electricity, and other utility services, connected technology is also integrated in your firmware, transmitted through your cloud services, and magnified over your social media feeds. 

China's dominance in electronics manufacturing, which gives it enormous influence over the global tech supply chain, is a major cause for concern. Malware has been found pre-installed on electronic equipment exported by Chinese manufacturers. These flaws are frequently built into the hardware and cannot be fixed with a simple update.

These risks are genuine and cause for concern, according to former NSA director Mike Rogers: "We know that China sees value in putting at least some of our key infrastructure at risk of disruption or destruction. I believe that the Chinese are partially hoping that the West's options for handling the security issue will be limited due to the widespread use of inverters."

A new level of complexity is introduced by autonomous cars. These rolling data centres have sensors, cameras, GPS tracking, and cloud connectivity, allowing for remote monitoring and deactivation. Physical safety and national infrastructure are at risk if parts or software come from unreliable sources. Even seemingly innocuous gadgets like fitness trackers, smart TVs, and baby monitors might have security flaws. 

They continuously gather and send data, frequently with little security or user supervision. Suzanne Bernstein, counsel at the Electronic Privacy Information Center, stated that "HIPAA does not apply to health data collected by many wearable devices and health and wellness apps."

The message is clear: even low-tech tools can become high-risk in a tech-driven environment. Foreign intelligence services do not need to sneak agents into enemy territory; they simply require access to the software supply chain. Malware campaigns such as China's APT41 and Russia's NotPetya demonstrate how compromised consumer and business software can be used for espionage and sabotage. Worse, these attacks are sometimes unnoticed for months or years before being activated—either during conflict or at times of strategic strain.

China Hacks Seized Phones Using Advanced Forensics Tool

 


An investigation by mobile security firm Lookout has raised significant concerns about digital privacy and state surveillance: police departments across China are using a sophisticated forensic system to extract data from seized phones.

The system, known as Massistant, was developed by Xiamen Meiya Pico, a Chinese cybersecurity and surveillance technology company. Lookout's analysis indicates that Massistant is geared toward extracting a wide range of sensitive data from confiscated smartphones, helping authorities perform comprehensive digital forensics on seized devices. The software can retrieve private messages, call records, contact lists, media files, GPS locations, audio recordings, and even encrypted messages from secure messaging applications like Signal.

The system demonstrates a notable leap in surveillance capabilities: it can access platforms once considered secure, potentially bypassing their encryption safeguards. This discovery points to increasing state control over personal data in China and underscores how intrusive digital tools are being used to support law enforcement operations within the country.

As technologies such as these grow more sophisticated and widespread, the need for human rights protection, privacy safeguards, and oversight on the global stage will only increase.

Massistant represents a significant advance in digital surveillance technology. It was developed by SDIC Intelligence Xiamen Information Co., Ltd., previously known as Meiya Pico. The tool gives authorities direct access to a wide range of personal data stored on mobile devices, including SMS messages, call histories, contact lists, GPS location records, multimedia files, audio recordings, and messages from encrypted messaging apps like Signal.

A report by Lookout, a mobile security firm, states that Massistant works in conjunction with desktop-based forensic analysis software, together forming a comprehensive system for obtaining digital evidence. Installing and operating the tool requires physical access to the device, usually obtained during security checkpoints, border crossings, or on-the-spot police inspections.

Once deployed, the system allows officials to conduct a detailed examination of the phone's contents, bypassing conventional privacy protections and encryption protocols. In the absence of transparent oversight, the emergence of such tools illustrates the growing sophistication of state surveillance capabilities and raises serious concerns over user privacy, data security, and the possibility of abuse.

Further investigation revealed that Massistant's deployment and functionality are closely tied to Chinese authorities' efforts to expand digital surveillance through both hardware and software tools. Lookout security researcher Kristina Balaam found that the tool's developer, Meiya Pico, now operating as SDIC Intelligence Xiamen Information Co., Ltd., maintains active partnerships with both domestic and foreign law enforcement agencies.

Beyond product development, these collaborations extend to specialised training programs designed to help law enforcement personnel become proficient in advanced technical surveillance techniques. Lookout's research, which analysed multiple Massistant samples collected between mid-2019 and early 2023, ties the tool directly to Meiya Pico: a signing certificate referencing the company can be found in the tool.

Massistant requires direct access to a smartphone, typically obtained during border inspections or police encounters, to facilitate its installation. Once installed, the tool integrates with a desktop forensics platform, enabling investigators to systematically extract large amounts of sensitive user information, including text messages, contact information, location history, and protected content from secure communication platforms.

Like its predecessor, MFSocket, Massistant connects mobile devices to a desktop in order to extract data from them. Upon activation, the application prompts the user to grant the permissions it needs to access private data held on the device. Once this initial authorisation is complete, the application requires no further interaction from the device owner.

If the user tries to close the application, a warning appears indicating that the software is in "get data" mode and that exiting will result in an error; this message is available only in Simplified Chinese and American English, suggesting the application's dual-target audience. Massistant also introduces several enhancements over MFSocket, notably the ability to connect to a user's Android device over WiFi via the Android Debug Bridge (ADB), allowing investigators to engage wirelessly and access additional data without a direct cable connection.

To help it remain undetected, the application is also designed to automatically uninstall itself once the USB cable is disconnected, so that no trace of the surveillance operation remains. These capabilities position Massistant as a powerful weapon in the arsenal of government-controlled digital forensics and surveillance tools, underlining growing concerns about privacy violations and the lack of transparency surrounding the deployment of such tools.

Balaam notes that despite its intrusive capabilities, Massistant does not operate in complete stealth, so users have a good chance of detecting and removing it from compromised devices. The tool can appear on a user's phone as a visible application, which can alert them to its presence.

Alternatively, technically proficient individuals can identify and remove the application using utilities such as the Android Debug Bridge (ADB), which provides a command-line interface for direct communication between a smartphone and a computer. Balaam cautions, however, that the data exfiltration process may be almost complete by the time a user discovers the tool, meaning authorities may already have accessed and extracted the most important personal information from the device.
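As a rough sketch of that ADB-based check, the Python below wraps `adb shell pm list packages -3` (a standard ADB command that lists third-party, user-installed packages) and flags anything the owner does not recognise. The package name Massistant installs under is not public, so the allowlist comparison and the example package names are purely illustrative.

```python
import subprocess

def parse_package_list(pm_output: str) -> list[str]:
    """Parse `pm list packages` output ("package:com.example.app" per line)
    into bare package names."""
    return [line.split(":", 1)[1].strip()
            for line in pm_output.splitlines()
            if line.startswith("package:")]

def unexpected_packages(pm_output: str, known: set[str]) -> list[str]:
    """Return third-party packages that are not in the owner's allowlist;
    anything flagged here deserves a closer look."""
    return [pkg for pkg in parse_package_list(pm_output) if pkg not in known]

def list_third_party_packages() -> list[str]:
    # Requires a USB-connected device with USB debugging enabled; `-3`
    # restricts the listing to third-party (user-installed) packages.
    out = subprocess.run(["adb", "shell", "pm", "list", "packages", "-3"],
                         capture_output=True, text=True, check=True).stdout
    return parse_package_list(out)

# A flagged package could then be removed with `adb uninstall <package>`.
```

Keeping the parsing separate from the `adb` call means the detection logic can be checked without a device attached.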

Massistant is regarded as the successor to MFSocket, an earlier Xiamen Meiya Pico mobile forensics tool that came under cybersecurity scrutiny in 2019. The evolution from MFSocket to Massistant demonstrates the company's continued investment in surveillance solutions tailored for forensic investigations.

According to industry data, Xiamen Meiya Pico controls around 40 per cent of the Chinese digital forensics market, cementing its position as the leading provider of data extraction technologies to law enforcement. Its activities have not gone unnoticed internationally: in 2021, the U.S. government imposed sanctions on Meiya Pico for allegedly supplying surveillance tools to Chinese authorities.

These surveillance tools have reportedly been used in ways that cause serious human rights and privacy violations. Media outlets, including TechCrunch, have inquired about the company's role in Massistant's development and distribution, but it has declined to respond.

Balaam pointed out that Massistant is just a small portion of a much larger and rapidly growing ecosystem of surveillance software developed by Chinese companies. Lookout is currently tracking over fifteen distinct families of spyware and malware originating from China, many of which are thought to be specifically designed for state surveillance and digital forensics.

This trend makes it apparent that the surveillance industry in the region is both large and mature, exacerbating global concerns about unchecked data collection and the misuse of intrusive technologies. As tools like Massistant become increasingly common, the global conversation surrounding privacy, state surveillance, and digital autonomy has reached a critical inflection point.

Mobile forensic technology has become increasingly powerful and accessible to government entities, blurring the line between lawful investigation and invasive overreach. This trend not only threatens individual privacy rights but also undermines trust in the digital ecosystem when transparency and accountability are lacking.

It also highlights the urgency for individuals to adopt stronger device security practices, stay informed about the risks associated with physical device access, and favour encrypted platforms that are resistant to unauthorised exploits.

For policymakers and technology companies around the world, the report highlights the need to develop and enforce robust regulatory frameworks governing the ethical use of surveillance tools, both domestically and internationally. Without adequate regulation and monitoring, these technologies may set a dangerous precedent, enabling abuses that extend well beyond their intended scope.

The Massistant case serves as a powerful reminder that the protection of digital rights is a central component of modern governance and civic responsibility in an age defined by data.