
AI Can Answer You, But Should You Trust It to Guide You?



Artificial intelligence tools are expanding faster than any digital product seen before, reaching hundreds of millions of users in a short period. Leading technology companies are investing heavily in making these systems sound approachable and emotionally responsive. The goal is not only efficiency, but trust. AI is increasingly positioned as something people can talk to, rely on, and feel understood by.

This strategy is working because users respond more positively to systems that feel conversational rather than technical. Developers have learned that people prefer AI that is carefully shaped for interaction over systems that are larger but less refined. To achieve this, companies rely on extensive human feedback to adjust how AI responds, prioritizing politeness, reassurance, and familiarity. As a result, many users now turn to AI for advice on careers, relationships, and business decisions, sometimes forming strong emotional attachments.

However, there is a fundamental limitation that is often overlooked. AI does not have personal experiences, beliefs, or independent judgment. It does not understand success, failure, or responsibility. Every response is generated by blending patterns from existing information. What feels like insight is often a safe and generalized summary of commonly repeated ideas.

This becomes a problem when people seek meaningful guidance. Individuals looking for direction usually want practical insight based on real outcomes. AI cannot provide that. It may offer comfort or validation, but it cannot draw from lived experience or take accountability for results. The reassurance feels real, while the limitations remain largely invisible.

In professional settings, this gap is especially clear. When asked about complex topics such as pricing or business strategy, AI typically suggests well-known concepts like research, analysis, or optimization. While technically sound, these suggestions rarely address the challenges that arise in specific situations. Professionals with real-world experience know which mistakes appear repeatedly, how people actually respond to change, and when established methods stop working. That depth cannot be replicated by generalized systems.

As AI becomes more accessible, some advisors and consultants are seeing clients rely on automated advice instead of expert guidance. This shift favors convenience over expertise. In response, some professionals are adapting by building AI tools trained on their own methods and frameworks. In these cases, AI supports ongoing engagement while allowing experts to focus on judgment, oversight, and complex decision-making.

Another overlooked issue is how information shared with generic AI systems is used. Personal concerns entered into such tools are not reviewed by a human professional who could use them to improve future guidance. Without accountability or follow-up, these interactions risk becoming repetitive rather than productive.

Artificial intelligence can assist with efficiency, organization, and idea generation. However, it cannot lead, mentor, or evaluate. It does not set standards or care about outcomes. Treating AI as a substitute for human expertise risks replacing growth with comfort. Its value lies in support, not authority, and its effectiveness depends on how responsibly it is used.

AI in Cybercrime: What’s Real, What’s Exaggerated, and What Actually Matters




Artificial intelligence is increasingly shaping the cyber security landscape, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, the evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.

For several years, AI has played a growing role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.

Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.

According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with lateral movement across networks. However, experts questioned these conclusions, citing the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.

There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.

Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.

Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.

There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.

The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.

The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.



Salesloft Hack Shows How Developer Breaches Can Spread




Salesloft, a popular sales engagement platform, has revealed that a breach of its GitHub environment earlier this year played a key role in a recent wave of data theft attacks targeting Salesforce customers.

The company explained that attackers gained access to its GitHub repositories between March and June 2025. During this time, intruders downloaded code, added unauthorized accounts, and created rogue workflows. These actions gave them a foothold that was later used to compromise Drift, Salesloft’s conversational marketing product. Drift integrates with major platforms such as Salesforce and Google Workspace, enabling businesses to automate chat interactions and sales pipelines.


How the breach unfolded

Investigators from cybersecurity firm Mandiant, who were brought in to assist Salesloft, found that the GitHub compromise was the first step in a multi-stage campaign. After the attackers established persistence, they moved into Drift’s cloud infrastructure hosted on Amazon Web Services (AWS). From there, they stole OAuth tokens, digital keys that allow applications to access user accounts without requiring passwords.
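To see why that matters, the short sketch below shows how a bearer token on its own authorizes a Salesforce REST API call; no password or multi-factor prompt is involved. It is only an illustration: the instance URL, API version, and token value are placeholders, not details from the incident.

```python
# Illustration only: a bearer token is all the API requires, so a stolen token
# bypasses passwords and MFA entirely. All values below are placeholders.
import requests

INSTANCE_URL = "https://example.my.salesforce.com"   # placeholder Salesforce org
TOKEN = "00Dxx0000000000!AQEA..."                    # placeholder OAuth access token

resp = requests.get(
    f"{INSTANCE_URL}/services/data/v59.0/query",
    params={"q": "SELECT Id, Subject FROM Case LIMIT 10"},
    headers={"Authorization": f"Bearer {TOKEN}"},    # the token alone grants access
    timeout=30,
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Id"], record["Subject"])
```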

These stolen tokens were then exploited in August to infiltrate Salesforce environments belonging to multiple organizations. By abusing the access tokens, attackers were able to view and extract customer support cases. Many of these records contained sensitive information such as cloud service credentials, authentication tokens, and even Snowflake-related access keys.


Impact on organizations

The theft of Salesforce data affected a wide range of technology companies. Attackers specifically sought credentials and secrets that could be reused to gain further access into enterprise systems. According to Salesloft’s August 26 update, the campaign’s primary goal was credential theft rather than direct financial fraud.

Threat intelligence groups have tracked this operation under the identifier UNC6395. Reports also suggest links to known cybercrime groups, although attribution has not been conclusively established.


Response and recovery

Salesloft said it has since rotated credentials, hardened its defenses, and isolated Drift’s infrastructure to prevent further abuse. Mandiant confirmed that containment steps have been effective, with no evidence that attackers maintain ongoing access. Current efforts are focused on forensic review and long-term assurance.

Following weeks of precautionary suspensions, Salesloft has now restored its Salesforce integrations. The company has also published detailed instructions to help customers safely resume data synchronization.

The incident highlights the risks of supply-chain-style attacks, where a compromise at one service provider can cascade into breaches at many of its customers. It also underscores the importance of securing developer accounts, closely monitoring access tokens, and limiting the sensitive data shared in support cases.
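As one concrete illustration of that kind of monitoring, the sketch below lists a repository's GitHub Actions workflows and collaborators through GitHub's public REST API so unexpected additions stand out during review. The organization, repository, and token are placeholders, and a read-only personal access token is assumed.

```python
# Audit sketch: list a repository's Actions workflows and collaborators so
# rogue additions stand out. The repo name and token are placeholders; a
# read-only personal access token is assumed in the GITHUB_TOKEN variable.
import os
import requests

REPO = "example-org/example-repo"            # placeholder repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

workflows = requests.get(
    f"https://api.github.com/repos/{REPO}/actions/workflows",
    headers=HEADERS, timeout=30,
).json()
for wf in workflows.get("workflows", []):
    print("workflow:", wf["name"], wf["path"])       # compare against a known-good list

collaborators = requests.get(
    f"https://api.github.com/repos/{REPO}/collaborators",
    headers=HEADERS, timeout=30,
).json()
for user in collaborators:
    print("collaborator:", user["login"])            # flag accounts nobody recognizes
```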

For organizations, best practices now include regularly rotating OAuth tokens, auditing third-party app permissions, and enforcing stronger segmentation between critical systems.
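As a minimal sketch of the first of those practices, the snippet below revokes an OAuth token that may have been exposed, using Salesforce's standard OAuth 2.0 revocation endpoint, so a fresh token must be issued through the normal consent flow. The token value is a placeholder; a real deployment would pull it from a secrets manager and handle errors more carefully.

```python
# Sketch of one rotation step: revoke a token that may have been exposed so a
# replacement must be issued through the normal OAuth flow. The token value is
# a placeholder; in practice it would come from a secrets manager.
import requests

REVOKE_URL = "https://login.salesforce.com/services/oauth2/revoke"
exposed_token = "00Dxx0000000000!AQEA..."    # placeholder refresh or access token

resp = requests.post(REVOKE_URL, data={"token": exposed_token}, timeout=30)
if resp.status_code == 200:
    print("Token revoked; issue a replacement via the standard OAuth flow.")
else:
    print("Revocation failed:", resp.status_code, resp.text)
```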