Threat Actors Target Misconfigured Proxies for Paid LLM Access

GreyNoise, a cybersecurity company, has uncovered two campaigns targeting large language model (LLM) infrastructure, in which attackers used misconfigured proxies to gain illicit access to commercial AI services. Starting in late December 2025, the attackers scanned more than 73 LLM endpoints and generated over 80,000 sessions in 11 days, using innocuous queries to evade detection. The activity highlights a growing threat to AI systems as attackers map vulnerable infrastructure for potential exploitation.

The first campaign, which started in October 2025, focused on server-side request forgery (SSRF) vulnerabilities in Ollama honeypots, producing a cumulative 91,403 attack sessions. The attackers supplied malicious registry URLs via Ollama’s model pull functionality and manipulated Twilio SMS webhooks to trigger outbound connections to their own infrastructure. A significant spike over Christmas produced 1,688 sessions in 48 hours from 62 IP addresses across 27 countries. The use of ProjectDiscovery’s OAST tools suggests the involvement of grey-hat researchers rather than full-fledged malware operations.

The second campaign began on December 28 from IP addresses 45.88.186.70 and 204.76.203.125, systematically scanning endpoints that support OpenAI and Google Gemini API formats. The targets included leading models such as OpenAI’s GPT-4o, Anthropic’s Claude series, Meta’s Llama 3.x, Google’s Gemini, Mistral, Alibaba’s Qwen, DeepSeek-R1, and xAI’s Grok. The attackers used low-noise queries, such as basic greetings or simple factual questions like “How many states in the US?”, to identify models while avoiding detection systems.

GreyNoise links the scanning IPs to prior CVE exploitation, including CVE-2025-55182, indicating professional reconnaissance rather than casual probing. While no immediate exploitation or data theft was observed, the scale of the activity signals preparation for abuse, such as free-riding on paid APIs or injecting malicious prompts. "Threat actors don't map infrastructure at this scale without plans to use that map," the report warns.

Organizations should restrict Ollama pulls to trusted registries, implement egress filtering, and block OAST domains such as *.oast.live at the DNS level. Additional defenses include rate-limiting suspicious ASNs (e.g., AS210558, AS51396), monitoring JA4 fingerprints, and alerting on multi-endpoint probes. As AI attack surfaces expand, proactively securing proxies and APIs is crucial to thwarting these evolving threats.
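
To make the multi-endpoint alerting idea concrete, here is a minimal sketch in Python that scans a hypothetical access log (CSV columns: timestamp, src_ip, path) and flags any source that probes several distinct LLM API paths. The log format, path list, and threshold are assumptions for illustration, not GreyNoise's tooling.

```python
import csv
from collections import defaultdict

# Hypothetical LLM endpoint paths worth watching (assumption for this sketch).
LLM_PATHS = {"/v1/chat/completions", "/v1/completions", "/api/generate",
             "/api/chat", "/v1beta/models"}

PROBE_THRESHOLD = 3  # distinct endpoints from one source before alerting

def flag_probers(log_path: str) -> dict[str, set[str]]:
    """Return sources that touched PROBE_THRESHOLD or more distinct LLM endpoints.

    Expects a CSV with columns: timestamp, src_ip, path (assumed format).
    """
    seen = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["path"] in LLM_PATHS:
                seen[row["src_ip"]].add(row["path"])
    return {ip: paths for ip, paths in seen.items()
            if len(paths) >= PROBE_THRESHOLD}

if __name__ == "__main__":
    for ip, paths in flag_probers("access_log.csv").items():
        print(f"ALERT: {ip} probed {len(paths)} LLM endpoints: {sorted(paths)}")
```

In practice the same grouping logic could feed a SIEM rule instead of a script; the point is that one source touching many model endpoints with short, low-noise queries is itself a signal.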

Cybercriminals Report Monetizing Stolen Data From US Medical Company


Modern healthcare operations are frequently plagued by ransomware, but the recent attack on Change Healthcare marks a turning point in scale and consequence. As the industry relies ever more heavily on digital platforms, it faces a threat environment shaped by organized cybercrime, fragile third-party dependencies, and an ever-growing data footprint.

Recent figures from 2025, with hundreds of ransomware and broader security incidents recorded in a matter of months, illustrate just how serious this shift is. A breach not only disrupts clinical and administrative workflows but also puts highly sensitive patient information at risk, carrying cascading operational, financial, and legal consequences for organizations.

The developments described here underscore a stark reality: safeguarding healthcare data no longer requires only technical safeguards; it demands a coordinated risk management strategy that anticipates breaches, limits their impact, and ensures institutional resilience should prevention fail.

Connecticut's Community Health Center (CHC) recently disclosed a significant data breach stemming from unauthorized access to its internal systems, an incident that exemplifies the sector's ongoing exposure to cyber risk.

In January 2025, the organization was alerted to irregular network activity, prompting an urgent forensic investigation that confirmed an intruder in its systems. Further analysis found that the attacker had maintained undetected access since mid-October 2024, allowing an extended window for data exfiltration before the breach was contained and publicly disclosed later that month.

There was no ransomware or disruption of operations during the incident, but the extent of the data accessed was significant, including names, dates of birth, Social Security numbers, health insurance details, and clinical records belonging to both patients and employees.

According to CHC, more than one million people, including several thousand employees, were affected. The incident demonstrates the persistent difficulty of early threat detection and data protection across healthcare networks, and underscores the urgent need for stronger security measures as medical records continue to attract cybercriminals.

According to Cytek Biosciences' notification to affected individuals, the biotechnology company learned in early November 2025 that an outside party had gained access to portions of its systems and later determined that personal information had been obtained.

Once the company became aware of the extent of the exposure, it moved to respond, including by offering eligible individuals free identity theft protection and credit monitoring services for up to two years.

Enrollment in that program remains open until the end of April 2026 as part of efforts to mitigate potential harm. Threat intelligence sources have linked the breach to Rhysida, a ransomware group that first emerged in 2023 and has since established itself as a prolific operation within the cybercrime ecosystem.

The group operates a ransomware-as-a-service model that combines data theft with system encryption, allowing affiliates to conduct attacks using its malware and infrastructure in return for a share of the revenue.

The Rhysida malware has been behind numerous attacks across several sectors since its inception, with healthcare among its most frequent targets. The group's previous intrusions have hit hospitals and care providers, but the Cytek incident is its first confirmed attack on a healthcare manufacturer, consistent with a broader trend of ransomware activity extending beyond direct patient care to medical suppliers and technology companies.

Research indicates that attacks of this kind can expose millions of records, disrupt critical services, and amplify risks to patient privacy and operational continuity, underscoring the growing complexity of the threat landscape facing the U.S. healthcare system.

The disruption has prompted affected organizations and individuals to step back and examine how Change Healthcare fits into the U.S. healthcare system and why its outage was so widespread.

With over 15 years of experience in healthcare technology and payment processing under the UnitedHealth Group umbrella, Change Healthcare has served as a vital intermediary between healthcare providers, insurers, and pharmacies, verifying eligibility, handling prior authorizations, submitting claims, and facilitating payments.

Because the company sits at the heart of these transactions, an operational failure extends far beyond the institution itself, causing cascading delays in prescription, reimbursement, and claims processing across the country.

A survey by the American Medical Association documented the magnitude of the impact, recording widespread financial and administrative stress among physician practices. Practices reported suspended or delayed claims payments, an inability to submit claims or receive electronic remittance advice, and widespread service interruptions.

Several practices cited significant revenue losses, forcing some to draw on personal funds or switch to an alternative clearinghouse to keep operating. Relief measures such as emergency funding and advance payments have helped, and UnitedHealth Group has disbursed more than $2 billion toward these efforts, but disruptions persist.

Patients, too, have suffered indirect effects through billing delays, unexpected charges, and notifications about potential data exposures. This has fueled public concern and renewed scrutiny of the systemic risks posed by the compromise of a central healthcare infrastructure provider.

Taken together, these incidents carry a clear and cautionary message for healthcare stakeholders: cyber resilience must be treated as a strategic priority, not a purely technical function.

Between large-scale ransomware campaigns, prolonged undetected intrusions, and failures at critical intermediaries, it is evident that a single breach can escalate into a systemic disruption affecting providers, manufacturers, and patients.

Industry leaders and regulators are increasingly calling for stronger third-party oversight, better breach detection tools, and the integration of financial, legal, and operational preparedness into cybersecurity strategies.

As the volume and value of healthcare data continue to grow, healthcare organizations must adopt proactive, enterprise-wide approaches to risk management. Those that fail to do so may find themselves unable to cope with cyber incidents, and may struggle to maintain trust, continuity, and care delivery in their aftermath.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

Indonesia has temporarily blocked access to Grok, Elon Musk's AI chatbot, following claims of misuse involving fabricated adult imagery. Authorities acted after manipulated visuals surfaced online; Reuters notes this as the first such restriction on the tool anywhere in the world. The move reflects growing unease, echoing across borders, about technology being used to cause harm, with the reaction spreading not through policy papers but through real-time consequences caught online.

A growing number of reports have linked Grok to incidents in which users created explicit imagery of women, sometimes involving minors, without consent. Not long after these concerns surfaced, Indonesia's digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms.

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts constitute grave cyber offenses demanding urgent regulatory attention. Temporary restrictions were imposed after Antara News highlighted risks tied to AI-generated explicit material.

Protection of women, children, and communities drove the move, which aims to reduce psychological and societal harm. Officials pointed out that fake but realistic intimate imagery counts as digital abuse, according to statements by Hafid: such fabricated visuals, though synthetic, still trigger real consequences for victims. The state insists artificial does not mean harmless, and that impact matters more than origin. Following concerns over Grok's functionality, the company behind it was served official notices demanding explanations of its development process and the harms observed.

Because of potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on adoption of rigorous filtering systems, according to Hafid. Compliance with national regulations and adherence to responsible artificial intelligence practices now shape the outcome. 

Only after these steps are demonstrated will the service be permitted to resume. Last week, Musk and xAI warned that using the chatbot for unlawful acts could lead to legal action; on X, Musk stated that individuals generating illicit material through Grok assume the same liability as those who post such content outright. Still, amid rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to soften slightly.

A re-shared post from one follower implied fault rests more with the people creating fakes than with the system hosting them. The debate has spread beyond borders, reaching American lawmakers: three U.S. senators wrote to Google and Apple, pushing for the removal of the Grok and X applications from their app stores over violations involving explicit material. Their correspondence framed the request around existing rules prohibiting sexually explicit imagery produced without consent.

Their chief concern was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. When tied to misuse such as non-consensual deepfakes, AI tools now face sharper government reactions, and Indonesia's move is part of this rising trend. Officials who were once slow to act increasingly treat such technology as a risk requiring strong intervention.

A shift is visible: responses that were once hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from the incidents themselves, but from how widely they spread before being challenged.

Fake Tax Emails Used to Target Indian Users in New Malware Campaign

A newly identified cyberattack campaign is actively exploiting trust in India’s tax system to infect computers with advanced malware designed for long-term surveillance and data theft. The operation relies on carefully crafted phishing emails that impersonate official tax communications and has been assessed as potentially espionage-driven, though no specific hacking group has been confirmed.

The attack begins with emails that appear to originate from the Income Tax Department of India. These messages typically warn recipients about penalties, compliance issues, or document verification, creating urgency and fear. Victims are instructed to open an attached compressed file, believing it to be an official notice.

Once opened, the attachment initiates a hidden infection process. Although the archive contains several components, only one file is visible to the user. This file is disguised as a legitimate inspection or review document. When executed, it quietly loads a concealed malicious system file that operates without the user’s awareness.
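
As a rough defensive illustration of that "only one file visible" trick, the sketch below (a generic check, not tied to this campaign's actual samples) enumerates every member of a ZIP attachment and flags script or executable payloads, including dot-prefixed files that file listings often hide. The extension list is an assumption for the example, not an exhaustive ruleset.

```python
import zipfile

# Script/executable extensions that are rarely legitimate in a "tax notice"
# attachment (illustrative list only).
RISKY_EXTS = (".vbs", ".js", ".exe", ".scr", ".bat", ".cmd", ".lnk", ".hta", ".dll")

def suspicious_members(zip_path: str) -> list[str]:
    """List every archive member and flag likely disguised payloads."""
    flagged = []
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            name = info.filename.lower()
            hidden = info.filename.startswith(".")  # dot-files are easy to miss
            if name.endswith(RISKY_EXTS) or hidden:
                flagged.append(info.filename)
    return flagged

if __name__ == "__main__":
    for member in suspicious_members("attachment.zip"):
        print(f"SUSPICIOUS: {member}")
```

A mail gateway applying even this crude rule would surface the concealed script components this campaign relies on, since the visible decoy document is not the only member of the archive.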

This hidden component performs checks to ensure it is not being examined by security analysts and then connects to an external server to download additional malicious code. The next stage exploits a Windows system mechanism to gain administrative privileges without triggering standard security prompts, allowing the attackers deeper control over the system.

To further avoid detection, the malware alters how it identifies itself within the operating system, making it appear as a normal Windows process. This camouflage helps it blend into everyday system activity.

The attackers then deploy another installer that adapts its behavior based on the victim’s security setup. If a widely used antivirus program is detected, the malware does not shut it down. Instead, it simulates user actions, such as mouse movements, to quietly instruct the antivirus to ignore specific malicious files. This allows the attack to proceed while the security software remains active, reducing suspicion.

At the core of the operation is a modified banking-focused malware strain known for targeting organizations across multiple countries. Alongside it, attackers install a legitimate enterprise management tool originally designed for system administration. In this campaign, the software is misused to remotely control infected machines, monitor user behavior, and manage stolen data centrally.

Supporting files are also deployed to strengthen control. These include automated scripts that change folder permissions, adjust user access rights, clean traces of activity, and enable detailed logging. A coordinating program manages these functions to ensure the attackers maintain persistent access.

Researchers note that the campaign combines deception, privilege escalation, stealth execution, and abuse of trusted software, reflecting a high level of technical sophistication and clear intent to maintain prolonged visibility into compromised systems.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links.

While such tactics aren't entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting.

Acronis reports that the malware harvests WhatsApp contact lists and sends harmful messages automatically, spreading fast with no need for constant operator input. Notably, while the main Astaroth component is still written in Delphi and the setup script remains in Visual Basic, analysts spotted a new worm-style propagation feature built entirely in Python. The mix of languages shows how attackers now assemble adaptable tools by blending code types for distinct jobs, and such variety supports stealthier, more responsive attack systems.

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Trend Micro earlier spotted the Water Saci group using comparable methods to push financial malware such as Maverick and a variant of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Later that year, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to these earlier patterns. That operation focused heavily on individuals in Brazil using WhatsApp, distributing the Astaroth malware through deceptive channels.

Nearly all infected machines - over 95 percent - were situated within Brazilian territory, though isolated instances appeared across the U.S. and Austria. Running uninterrupted from early autumn 2025, the method leaned on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with past reports. Messages on WhatsApp now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 

The software also logs its own performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Embedded reporting tools spotted by Acronis give the attackers a constant stream of operational insight.

Looking Beyond the Hype Around AI-Built Browser Projects


Cursor, the company behind the AI-integrated development environment of the same name, recently drew industry attention after suggesting it had developed a fully functional browser using its own AI agents. In a series of public statements, Cursor chief executive Michael Truell claimed the browser was built with GPT-5.2 working inside the Cursor platform.


According to Truell, the project spans approximately three million lines of code across thousands of files, including a custom rendering engine written from scratch in Rust.

He added that the system supports a browser's core features, including HTML parsing, CSS cascading and layout, text shaping, painting, and a custom-built JavaScript virtual machine.
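
For readers unfamiliar with those terms, the toy sketch below models the classic engine pipeline the announcement describes: parse, style (cascade), layout, paint. It is purely schematic and bears no relation to Cursor's actual Rust code; real engines add incremental re-layout, text shaping, a JavaScript VM, and much more.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    tag: str
    children: list["Node"] = field(default_factory=list)
    style: dict[str, str] = field(default_factory=dict)
    y: int = 0  # vertical position computed during layout

def parse(tags: list[str]) -> Node:
    """'Parse': build a trivial one-level tree from a flat tag list."""
    root = Node("html")
    root.children = [Node(t) for t in tags]
    return root

def cascade(node: Node, rules: dict[str, dict[str, str]]) -> None:
    """'Style': attach the matching rule block to each node."""
    node.style = rules.get(node.tag, {})
    for child in node.children:
        cascade(child, rules)

def layout(node: Node, y: int = 0) -> int:
    """'Layout': stack boxes vertically, 20 units per box."""
    node.y = y
    for child in node.children:
        y = layout(child, y + 20)
    return y

def paint(node: Node, depth: int = 0) -> None:
    """'Paint': render to text instead of pixels."""
    print("  " * depth + f"<{node.tag}> y={node.y} style={node.style}")
    for child in node.children:
        paint(child, depth + 1)

tree = parse(["h1", "p", "p"])
cascade(tree, {"h1": {"font-size": "2em"}, "p": {"margin": "8px"}})
layout(tree)
paint(tree)
```

Even this caricature hints at why a from-scratch engine is such an enormous claim: each stage hides decades of specification detail.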

Although the statements did not explicitly deny substantial human involvement in the browser's creation, they sparked heated debate within the software development community over whether the majority of the work can truly be attributed to autonomous AI systems, and how such claims should be interpreted amid the growing popularity of AI-assisted software development.

The episode unfolds against a backdrop of intensifying optimism about generative AI, optimism that has inspired unprecedented investment across a variety of industries. Alongside that optimism, however, a more sobering reality is beginning to emerge.

A McKinsey study indicates that although roughly 80 percent of companies report adopting advanced AI tools, a similar share has seen little to no improvement in revenue growth or profitability.

General-purpose AI applications can improve individual productivity, but their incremental time savings rarely translate into tangible financial results, while higher-value, domain-specific applications tend to stall in the experimental or pilot stage. Analysts increasingly describe this disconnect as the generative AI value paradox.

The tension has only grown with the advent of so-called agentic AI: autonomous systems capable of planning, deciding, and acting independently to achieve predefined objectives.

Such systems promise benefits well beyond assistive tools, but they also raise the stakes for credibility and transparency. In the case of Cursor's browser project, the decision to make the code publicly available proved crucial.

Despite the enthusiastic headlines, developers who examined the repository found that the software frequently failed to compile, rarely ran as advertised, and fell well short of the capabilities the announcement implied.

Close inspection and testing of the code made clear that the marketing claims did not match reality. Ironically, most developers found the accompanying technical document, which detailed the project's limitations and partial successes, more convincing than the original announcement.

Cursor acknowledges that over roughly a week it deployed hundreds of GPT-5.2 agents, which generated about three million lines of code and assembled what amounted, on the surface, to a partially functional browser prototype.

Perplexity, an AI-driven search and analysis platform, estimates that the experiment could have consumed between 10 and 20 trillion tokens, which at prevailing prices for frontier AI models would translate into a cost of several million dollars.
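
The arithmetic behind that range is easy to sanity-check. The blended per-million-token price below is an assumption chosen purely for illustration (real frontier-model pricing varies widely by model and by input/output mix), but it shows how 10 to 20 trillion tokens lands in the single-digit millions of dollars.

```python
# Back-of-envelope check of the cost estimate. The blended per-token price
# is an assumption for illustration, not a quoted figure.
price_per_million_tokens = 0.30  # assumed blended rate, USD per 1M tokens

for trillions in (10, 20):
    tokens = trillions * 1e12
    cost = tokens / 1e6 * price_per_million_tokens
    print(f"{trillions}T tokens -> ~${cost / 1e6:.0f}M")
# 10T tokens -> ~$3M; 20T tokens -> ~$6M, i.e. "several million dollars"
```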

Such figures demonstrate the ambition of the effort, but they also underline the industry's current skepticism: scale alone does not equate to sustained value or technical maturity. Meanwhile, several converging forces are driving AI companies to target the web browser itself rather than plug-ins or standalone applications.

For decades, browsers have been the most valuable source of behavioral data and, by extension, of ad revenue. They capture search queries, clicks, and browsing patterns, which have paved the way for highly profitable ad-targeting systems.

Google built its position as the world's most powerful search engine largely on this model. For AI providers, owning the browser offers direct access to this stream of data exhaust, reducing dependency on third-party platforms and securing a privileged position in the advertising value chain.

Analysts note that controlling the browser can also anchor a company's search product and the commercial benefits that follow from it. OpenAI's upcoming browser is reportedly intended to collect first-party data on users' web behavior, a strategy aimed at challenging Google's ad-driven ecosystem.

Insiders cited in the report suggest the company chose to build a browser rather than an extension for Chrome or Edge because it wanted more control over user data. Beyond advertising, the continuous feedback loop of user actions provides another advantage: each scroll, click, and query can be used to refine and personalize AI models, strengthening the product over time.

In the meantime, advertising remains one of the few scalable monetization paths for consumer-facing artificial intelligence, and both OpenAI and Perplexity appear to be positioning their browsers accordingly, as highlighted by recent hirings and the quiet development of ad-based services. 

AI companies also argue that browsers offer the chance to fundamentally rethink the user experience of the web. Traditional browsing, with its heavy reliance on tabs, links, and manual comparison, is increasingly viewed as inefficient and cognitively fragmented.

AI-first browsers aim to replace navigation-heavy workflows with conversational, context-aware interactions. Perplexity's Comet browser, positioned as an “intelligent interface,” is meant to be available at any moment, letting the AI research, summarize, and synthesize information in real time.

Rather than requiring clicks through multiple pages, complex tasks are condensed into seamless interactions that maintain context at every step. OpenAI's planned browser is likely to follow a similar approach, integrating a ChatGPT-like assistant directly into the browsing environment so users can act on information without leaving the page.

The browser becomes a constant co-pilot, able to draft messages, summarize content, or perform transactions on the user's behalf rather than merely run searches. Some have described this as a shift from search to cognition.

Companies embedding AI deeply into everyday browsing hope not only to improve convenience but also to keep users engaged in their ecosystems longer, strengthening brand recognition and habitual usage. A proprietary browser also enables the integration of AI services and agent-based systems that are difficult to deliver through third-party platforms.

Full control of the browser architecture lets companies embed language models, plugins, and autonomous agents at a foundational level. OpenAI's browser, for instance, is expected to integrate directly with the company's emerging agent platform, enabling software that can navigate websites, complete forms, and perform multi-step actions on its own.

Similar ambitions are evident elsewhere: The Browser Company's Dia features an AI assistant in the address bar, combining search, chat, and task automation while maintaining awareness of the user's context across multiple tabs. Such browsers point to a broader trend of building browsers around AI rather than bolting AI features onto existing ones.

In this approach, a company's AI services become the default experience whenever users search or interact with the web, rather than an optional enhancement.

Finally, there is competitive pressure. Google's dominance in search and browsers has long been mutually reinforcing, channeling data and traffic through Chrome into the company's advertising empire and consolidating its position.

AI-first browsers pose a direct threat to this structure, aiming to divert users away from traditional search and toward AI-mediated discovery.

Perplexity's browser is part of a broader effort to compete with Google in search, and Reuters reports that OpenAI's move into browsers intensifies its rivalry with Google. Controlling the browser lets AI companies intercept user intent at an earlier stage, freeing them from dependence on existing platforms and shielding them from future changes to default settings and access rules.

Smaller AI players must also be ready to defend their position as Google, Microsoft, and others rapidly integrate AI into their own browsers.

With browsers still central to everyday life and work, the race to integrate AI into these interfaces is intensifying, and many observers already describe the contest as the beginning of a new, AI-driven browser era.

Taken together, the Cursor episode and the push toward AI-first browsers sound a cautionary note for an industry racing ahead of its own evidence. Whatever the public claims of autonomy and scale, open repositories and independent scrutiny remain the ultimate arbiters of technical reality.

As companies reposition the browser as a strategic battleground, promising efficiency, personalization, and control, developers, enterprises, and users alike would do well to separate ambition from real-world implementation.

Analysts do not expect AI-powered browsers to fail outright; rather, their impact will depend less on headline-grabbing demonstrations than on demonstrated reliability, transparent attribution of human versus machine work, and thoughtful evaluation of security and economic trade-offs. In an industry known for speed and spectacle, that kind of patience may yet be the scarcest resource of all.
