
Think Twice Before Using Text Messages for Security Codes — Here’s a Safer Way

 



In today’s digital world, many of us protect our online accounts using two-step verification. This process, known as multi-factor authentication (MFA), usually requires a password and an extra code, often sent via SMS, to log in. It adds an extra layer of protection, but there’s a growing concern: receiving these codes through text messages might not be as secure as we think.


Why Text Messages Aren’t the Safest Option

When you get a code on your phone, you might assume it’s sent directly by the company you’re logging into—whether it’s your bank, email, or social media. In reality, these codes are often delivered by external service providers hired by big tech firms. Some of these third-party firms have been connected to surveillance operations and data breaches, raising serious concerns about privacy and security.

Worse, these companies operate with little public transparency. Several investigative reports have highlighted how this lack of oversight puts user information at risk. Additionally, government agencies such as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) have warned people not to rely on SMS for authentication. Text messages are not encrypted, which means hackers who gain access to a telecom network can intercept them easily.


What Should You Do Instead?

Don’t ditch multi-factor authentication altogether. It’s still a critical defense against account hijacking. But you should consider switching to a more secure method—such as using an authenticator app.


How Authenticator Apps Work

Authenticator apps are programs installed on your smartphone or computer. They generate temporary codes for your accounts that refresh every 30 seconds. Because the codes are generated locally on your device from a stored secret, rather than delivered over a phone network, they’re far more difficult for criminals to intercept.

Apps like Google Authenticator, Microsoft Authenticator, LastPass, and even Apple’s built-in password tools provide this functionality. Most major platforms now allow you to connect an authenticator app instead of relying on SMS.
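Under the hood, these apps implement the TOTP standard (RFC 6238): a shared secret, saved when you scan the setup QR code, is combined with the current time to produce each code. Here is a minimal sketch in Python, using a made-up secret for illustration:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, now=None):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second steps since the Unix epoch
    counter = int(time.time() if now is None else now) // period
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A made-up secret of the kind encoded in a setup QR code:
print(totp("JBSWY3DPEHPK3PXP"))  # six digits, different every 30 seconds
```

A server holding the same secret runs the identical calculation and accepts your code only if it matches for the current 30-second window, which is why the code never needs to be sent to you.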


Want Even Better Protection? Try Passkeys

If you want the most secure login method available today, look into passkeys. These are a newer, password-free login option developed by the FIDO Alliance, an industry group whose members include Apple, Google, and Microsoft. Instead of typing in a password or code, you unlock your account using your face, fingerprint, or device PIN.

Here’s how it works: your device stores a private key, while the website keeps the matching public key. When you log in, the site sends your device a random challenge; once you prove your identity with a biometric scan or PIN, the device signs the challenge with its private key, and the site verifies the signature with the public key. Because no codes or passwords travel over the network, there’s nothing for hackers to steal or intercept.
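That login flow is a challenge-response signature. The following toy sketch uses deliberately tiny textbook RSA numbers to show the idea; real passkeys use the WebAuthn standard with strong keys (such as ECDSA P-256), and the private key never leaves the device’s secure hardware:

```python
import hashlib
import secrets

# Toy RSA key pair with textbook-small numbers (n = 61 * 53).
# These values are only for illustration and offer no real security.
n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent

def sign(message):
    """Device side: sign a challenge with the private key d."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message, signature):
    """Server side: check the signature with the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# Login flow: the site sends a random challenge; the device signs it only
# after the user passes a biometric or PIN check; the site verifies.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
print("login verified")  # prints "login verified"
```

Because only a signature of a one-time random challenge crosses the network, intercepting it is useless to an attacker: it cannot be replayed against a fresh challenge.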

Passkeys can also be synced through your cloud account (for example, iCloud Keychain or Google Password Manager), so if you lose your device, you can still regain access securely.


Multi-factor authentication is essential—but how you receive your codes matters. Avoid text messages when possible. Opt for an authenticator app, or better yet, move to passkeys where available. Taking this step could be the difference between keeping your data safe or leaving it vulnerable.

Guidelines on What Not to Share with ChatGPT: A Formal Overview

 


A tool like ChatGPT has remarkable power, and it has profoundly changed how we interact with computers. There are, however, some limitations that are important to understand and bear in mind when using it.

ChatGPT has driven a massive increase in OpenAI's revenue alongside a massive increase in content. The company's annual revenue reportedly grew to around 200 million dollars in 2023 and is projected to exceed one billion dollars by the end of 2024.

ChatGPT is powered by a wide array of algorithms capable of generating almost any text users ask for, from a simple math problem to a complex question about rocket science. As AI-powered chatbots become more prevalent, it is crucial to acknowledge both the advantages artificial intelligence can offer and its shortcomings.

To use AI chatbots safely, it is essential to understand the inherent risks associated with them, such as the potential for cyber attacks and privacy issues. A recent change to Google's privacy policy, for instance, makes clear that the company may use data collected from public web posts to train its AI models and tools.

It is equally troubling that ChatGPT retains chat logs to improve the model and the reliability of the service. There is, however, a way to address this concern: do not share certain kinds of information with AI-based chatbots. Jeffrey Chester, executive director of the Center for Digital Democracy, a digital-rights advocacy organization, has said that consumers should view these tools with suspicion at the very least, since, like so many other popular technologies, they are heavily influenced by the marketing and advertising industries.

The Limits Of ChatGPT 


Without browsing enabled (a feature reserved for ChatGPT Plus), the system generates responses based on the patterns and information it learned during training on a wide range of internet text, with a knowledge cutoff of September 2021.

Even so, it cannot understand context the way people do, and it does not "know" anything in the human sense of knowing. ChatGPT is famous for producing impressive and relevant responses a great deal of the time, but it is not infallible: the answers it produces can be incorrect or unintelligible for several reasons.

Its proficiency largely depends on the quality and clarity of the prompt given. 

1. Banking Credentials 


The Consumer Financial Protection Bureau (CFPB) published a report on June 6 about the limitations of chatbot technology as the complexity of questions increases. According to the report, relying on chatbots could lead financial institutions to violate federal consumer protection laws.

According to the CFPB, consumer complaints have increased over a variety of issues, including resolving disputes, obtaining accurate information, receiving good customer service, reaching human representatives, and keeping personal information secure. In light of this, the CFPB advises financial institutions against relying solely on chatbots in their business model.

2. Personally Identifiable Information (PII)


Users should be careful whenever sharing sensitive personal information that could identify them, in order to protect their privacy and minimise the risk of misuse. This category includes a full name, home address, Social Security number, credit card numbers, and any other details that identify someone as an individual. Protecting these sensitive details is paramount to ensuring privacy and preventing potential harm from unauthorised use.
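As a practical precaution, some users strip obvious PII from a prompt before sending it to a chatbot. A rough sketch of the idea, with hypothetical patterns covering just two common formats (real redaction tools handle many more, including names, addresses, and locale variants):

```python
import re

# Hypothetical patterns for two common PII formats; illustration only.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt):
    """Replace likely PII with placeholders before a prompt leaves your machine."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."))
# -> My SSN is [SSN REDACTED] and my card is [CARD REDACTED].
```

Pattern matching like this is a best-effort filter, not a guarantee; the safest approach remains simply not typing sensitive details into a chatbot at all.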

3. Confidential information about the user's workplace


Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. It is crucial to understand the potential risks associated with divulging sensitive data to these virtual assistants. 

Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented stringent policies to prohibit the use of AI chatbots by their employees, recognizing the importance of protecting confidential information. 

A recent Bloomberg article shed light on an unfortunate incident involving a Samsung employee who inadvertently uploaded confidential code to a generative AI platform while utilizing ChatGPT for coding tasks. This breach resulted in the unauthorized disclosure of private information about Samsung, which subsequently led to the company imposing a complete ban on the use of AI chatbots. 

Such incidents highlight the need for heightened vigilance and adherence to security measures when leveraging AI chatbots. 

4. Passwords and security codes 


If a chatbot asks you for passwords, PINs, security codes, or any other confidential access credentials, do not provide them. Even though these chatbots are designed with privacy in mind, it is prudent to prioritise your safety and refrain from sharing sensitive information.

Keeping your passwords and access credentials to yourself is paramount to keeping your accounts secure and protecting your personal information from unauthorised access or misuse.

In an age marked by the rapid progress of AI chatbot technology, careful protection of personal and sensitive information is of the utmost importance. The guidance above underscores the need to engage with AI-driven virtual assistants responsibly and cautiously, with the primary objective of preserving privacy and the integrity of your data. Stay well-informed and exercise prudence when interacting with these powerful tools.