
Online Jobseekers Beware: Strategies to Outsmart Scammers

 


Employment scams are on the rise, and so is the number of job seekers targeted by cunning scammers. Anyone searching for a new job is advised to stay vigilant and to know what to look out for in order to protect themselves.

Precisely what is the scam? 

Criminals pose as reputable companies on fake websites that mimic the real thing, posting fictitious job descriptions and inviting applications for jobs that do not exist. The scammers then make false job offers to candidates.

Sometimes the fraudster asks for personal information, such as a person's address, bank details, or passport number. These scams increasingly mimic legitimate recruitment activity, often appearing to recruit through third-party websites or direct email exchanges.

Job seekers are increasingly caught up in this sort of scam, known as recruitment fraud. Scammers have been known to target job seekers with fake openings on LinkedIn, a social network that receives more than 100 job application submissions per second.

Approximately two-thirds of British users have been targeted in the last several years, according to a study by the security firm NordLayer. Scammers do not limit themselves to LinkedIn: they exploit other genuine, well-known job websites and send email solicitations directly to university students' inboxes.

Scammers employ two main methods to con their victims. The first is a job offer with basic information about the company and the position, which sounds very interesting to job seekers, together with a link promising a detailed presentation about the company and the role, says Jedrzej Pyzik, a recruitment consultant at the financial recruitment firm FTeam.

After clicking through the link, the user typically lands on pages that require them to download a certain program, log in, and provide personal information - the most common pattern he has noticed, Pyzik said.

Once that data is obtained, it can be used to steal the job seeker's identity, open a bank account in their name, or apply for credit in their name. Another popular scam involves asking "successful" applicants to send a substantial amount of money upfront, with the promise that it will be paid back once they are hired - a practice known as an advance fee scam.

Victims may feel more inclined to pay if they are told the amount covers training fees, criminal background checks by the Disclosure and Barring Service (DBS), travel costs such as visas, or equipment needed for the job.

The problem is that if a check is ever received to cover these costs, it will bounce. A large part of the problem involves fake job ads, which are especially common in recruitment aimed at students and recent graduates, who may be less familiar with the hiring process.

Several scams have recently targeted US university students, according to the security firm Proofpoint, offering jobs in the biosciences, healthcare, and biotechnology fields. These scams appear to have reached students across the country.

Is there a way to protect yourself from these frauds?

To identify phishing scams, follow these five tips: 

Avoid generic emails at all costs. Scammers that leave specific details out of their messages are casting a wide net. Be cautious whenever an email seems overly generic.

Check the spelling of domain names and email addresses very carefully. Even a slight alteration, such as a swapped letter or a change of case, can redirect the job seeker to a different domain where they may fall victim to identity theft.
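As an illustration of that tip, here is a minimal sketch of how a lookalike sender domain could be flagged automatically. The trusted-domain list, the threshold, and the function names are assumptions for this example, not part of any real mail client:

```python
from difflib import SequenceMatcher

# Hypothetical list of domains the job seeker actually trusts.
TRUSTED_DOMAINS = {"linkedin.com", "reed.co.uk", "examplecorp.com"}

def lookalike_score(domain: str, trusted: str) -> float:
    """Similarity ratio between two domains (1.0 = identical)."""
    return SequenceMatcher(None, domain.lower(), trusted.lower()).ratio()

def check_sender(email_address: str, threshold: float = 0.8) -> str:
    """Classify a sender as trusted, a suspicious lookalike, or unknown."""
    domain = email_address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    for trusted in TRUSTED_DOMAINS:
        # Close to a trusted domain, but not an exact match: likely a lookalike.
        if lookalike_score(domain, trusted) >= threshold:
            return f"suspicious lookalike of {trusted}"
    return "unknown"
```

A message from `hr@linkedn.com`, for example, scores very close to `linkedin.com` and would be flagged, while an exact match passes as trusted.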

A recruitment agent from an authentic company will most often ask applicants for an in-person interview if the candidate truly meets all the requirements for the job. 

A recruiter will never ask a prospective candidate for financial information or payments as part of an employment application or as a condition of employment or anything similar.

A job post described as the "perfect job" usually relies on the promise of high pay for a position that requires no skills or experience.

Such a job is likely a scam: it is simply too good to be true. Meanwhile, job platforms are making a concerted effort to eliminate job scams on their services.

A report from LinkedIn claims that 99.3% of the spam and scams it detects are caught by its automated defences, and 99.6% of the fake accounts it detects are blocked before members even know they exist. Additionally, job websites are also doing their part to help those looking for work. 

According to Keith Rosser, director of group risk at Reed, it is the company's policy to run automatic verification processes that confirm the validity of its advertisers - a process that involves checking Companies House information, the company's domain information, and the advertiser's email and physical addresses.
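Reed's actual verification pipeline is not public, but one of the simplest checks implied above - that an advertiser's contact email matches the company website they claim to represent - can be sketched as follows. The helper names and the deliberately crude domain handling are assumptions for illustration only:

```python
from urllib.parse import urlparse

def registrable_domain(host: str) -> str:
    """Crude registrable-domain extraction: keep the last two labels.

    A real verifier would consult the Public Suffix List instead.
    """
    parts = host.lower().strip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host.lower()

def advertiser_email_matches_site(email: str, website_url: str) -> bool:
    """True if the advertiser's email domain matches their claimed website."""
    email_domain = registrable_domain(email.rsplit("@", 1)[-1])
    site_host = urlparse(website_url).hostname or ""
    return email_domain == registrable_domain(site_host)
```

An advertiser using a free webmail address while claiming to recruit for a named company would fail this check and warrant a closer look.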

Job seekers are advised, however, to be cautious and to verify that an employer is legitimate - and that the organisation actually exists - before sharing any personal information.

Google's Bard AI Bot Error Cost the Company $100 Billion in Market Value

Google is looking for ways to reassure people that it is still at the forefront of artificial intelligence technology. So far, the internet behemoth appears to be getting it wrong: an advertisement for its new AI bot showed it answering a question incorrectly.
Alphabet shares fell more than 7% on Wednesday, erasing $100 billion (£82 billion) from the company's market value. In the promotion for the bot, known as Bard, which was released on Twitter on Monday, the bot was asked what to tell a nine-year-old about James Webb Space Telescope discoveries.

It responded that the telescope was the first to take images of a planet outside the Earth's solar system, when in fact the European Southern Observatory's Very Large Telescope did so in 2004 - a mistake quickly corrected by astronomers on Twitter.

"Why didn't you fact check this example before sharing it?" Chris Harrison, a fellow at Newcastle University, replied to the tweet.

Investors were also underwhelmed by the company's presentation on its plans to incorporate artificial intelligence into its products. Since late last year, when Microsoft-backed OpenAI revealed new ChatGPT software, Google has been under fire. It rapidly became a viral sensation due to its ability to pass business school exams, compose song lyrics, and answer other questions.

A Google spokesperson stated the error emphasized "the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester programme".

"We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety, and groundedness in real-world information," they said.
 
Alphabet, Google's parent company, laid off 12,000 employees last month, accounting for about 6% of its global workforce.


Instagram to roll out new features to counter cyberbullying

Bullying. Sadly, it's a problem that is not restricted to the school grounds of our younger and geekier selves; it tends to follow people around regardless of age or privacy. Cyberbullying has become more widespread than traditional bullying and is often just as traumatic for its victims - a problem tech companies are increasingly trying to address.

Instagram has new features (via The Verge) on its way that it’s hoping will address cyberbullying by finally allowing people to “shadow ban” others and a new artificial intelligence that is designed to flag potentially offensive comments. Both initiatives are looking to be put into testing soon.

The “shadow ban” will essentially let a user restrict another user without that person realising they have been banned. The restricted person will still be able to see your posts and comment on them, but their comments will be visible only to themselves, meaning you and the people you actually want to interact with can keep talking in peace while said person wonders why their snarky comments are not getting any responses from you.
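The visibility rule described above - a restricted user's comments are shown only to that user themselves - can be sketched as a small data model. This is an illustration of the idea, not Instagram's actual implementation, and all names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    name: str
    restricted: set = field(default_factory=set)  # users this account restricted

@dataclass
class Comment:
    author: str
    text: str

def visible_comments(viewer: str, post_owner: Account, comments: list) -> list:
    """Return the comments a given viewer sees under post_owner's post.

    A restricted user's comments are filtered out for everyone except the
    restricted user, so they never realise they have been restricted.
    """
    return [
        c for c in comments
        if c.author not in post_owner.restricted or c.author == viewer
    ]
```

With this rule, the restricted commenter sees their own remarks in place as usual, while every other viewer - including the post owner - gets a thread with those comments silently removed.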

Along with this feature, Instagram is also hoping to leverage a new AI to flag potentially offensive comments and ask the commenter if they really want to follow through with posting. They’ll be given the opportunity to undo their comment, and Instagram says that during tests, it encouraged “some” people to reflect on and undo what they wrote. A nice touch, though given the emotional state most bullies are in, it’s unlikely to alter course for most people. Still, it’s better than nothing.
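In the simplest case, the nudge could be driven by a word-list check like the sketch below. Instagram's real system uses a trained AI classifier, so the term list and function name here are purely illustrative:

```python
# Hypothetical word list; a production system would use a trained classifier
# rather than fixed terms.
OFFENSIVE_TERMS = {"idiot", "loser", "ugly"}

def should_nudge(comment: str) -> bool:
    """Decide whether to show an 'are you sure you want to post this?' prompt."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return not words.isdisjoint(OFFENSIVE_TERMS)
```

A flagged comment would trigger the prompt before posting, giving the commenter the chance to undo it, as described above.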

Instagram has already tested multiple bully-focused features, including an offensive comment filter that automatically screens bullying comments that “contain attacks on a person’s appearance or character, as well as threats to a person’s well-being or health” as well as a similar feature for photos and captions. So this shows a real effort by Facebook to tackle this problem on the platform.

Can AI become a new tool for hackers?

Over the last three years, the use of AI in cybersecurity has been an increasingly hot topic. Every new company that enters the market touts its AI as the best and most effective, and existing vendors, especially those in the enterprise space, are deploying AI to reinforce their existing security solutions. AI is enabling IT professionals to predict and react to emerging cyber threats more quickly and effectively than ever before. So how can they expect to respond when AI falls into the wrong hands?

Imagine a constantly evolving and evasive cyberthreat that could target individuals and organisations remorselessly. This is the reality of cybersecurity in the era of AI.

There has been no reduction in the number of breaches and incidents despite the focus on AI. Rajashri Gupta, Head of AI at Avast, sat down with Enterprise Times to talk about AI and cybersecurity, explaining that part of the challenge is not just having enough data to train an AI but the need for diverse data.

This is where many new entrants into the market are challenged. They can train an AI on small sets of data, but is that enough? How do they teach the AI to tell the difference between a real attack and a false positive? Gupta talked about this and how Avast is dealing with the problem.

During the podcast, Gupta also touched on the challenge of ethics for AI and how we deal with privacy. He also talked about IoT and what AI can deliver to help spot attacks against those devices. This is especially important for Avast, which is to launch a new range of devices for the home security market this year.

AI has shaken up cybersecurity, with automated threat prevention, detection, and response revolutionising one of the fastest-growing sectors in the digital economy.

Hackers are using AI to produce polymorphic malware, which constantly changes its code so it can't be identified.