
Slack Faces Backlash Over AI Data Policy: Users Demand Clearer Privacy Practices

 

In February, Slack introduced its AI capabilities, positioning itself as a leader in the integration of artificial intelligence within workplace communication. However, recent developments have sparked significant controversy. Slack's current policy, which collects customer data by default for training AI models, has drawn widespread criticism and calls for greater transparency and clarity. 

The issue gained attention when Gergely Orosz, an engineer and writer, pointed out that Slack's terms of service allow the use of customer data for training AI models, despite reassurances from Slack engineers that this is not the case. Aaron Maurer, a Slack engineer, acknowledged the need for updated policies that explicitly detail how Slack AI interacts with customer data. This discrepancy between policy language and practical application has left many users uneasy. 

Slack's privacy principles state that customer data, including messages and files, may be used to develop AI and machine learning models. In contrast, the Slack AI page asserts that customer data is not used to train Slack AI models. This inconsistency has led users to demand that Slack update its privacy policies to reflect the actual use of data. The controversy intensified as users on platforms like Hacker News and Threads voiced their concerns. Many felt that Slack had not adequately notified users about the default opt-in for data sharing. 

The backlash prompted some users to opt out of data sharing, a process that requires contacting Slack directly with a specific request. Critics argue that this process is cumbersome and lacks transparency. Salesforce, Slack's parent company, has acknowledged the need for policy updates. A Salesforce spokesperson stated that Slack would clarify its policies to ensure users understand that customer data is not used to train generative AI models and that such data never leaves Slack's trust boundary. 

However, these changes have yet to address the broader issue of explicit user consent. Questions about Slack's compliance with the General Data Protection Regulation (GDPR) have also arisen. GDPR requires explicit, informed consent for data collection, obtained through affirmative opt-in mechanisms rather than default enrollment. Despite Slack's commitment to GDPR compliance, the current controversy suggests that its practices may not align fully with these regulations. 
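
To make the opt-in versus default opt-in distinction concrete, here is a minimal sketch (hypothetical names, not Slack's actual code) of what a GDPR-style consent flag looks like: data sharing stays off until the user takes an explicit, affirmative action.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # GDPR-style default: nothing is shared until the user acts.
    ai_training_opt_in: bool = False

def record_consent(settings: PrivacySettings, user_clicked_agree: bool) -> None:
    """Flip the flag only on an explicit, affirmative action.

    A default opt-in (the practice Slack is criticized for) would set
    the flag to True before the user ever made a choice.
    """
    if user_clicked_agree:
        settings.ai_training_opt_in = True

settings = PrivacySettings()
assert settings.ai_training_opt_in is False  # compliant default: opted out
record_consent(settings, user_clicked_agree=True)
assert settings.ai_training_opt_in is True   # consent recorded explicitly
```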

As more users opt out of data sharing and call for alternative chat services, Slack faces mounting pressure to revise its data policies comprehensively. This situation underscores the importance of transparency and user consent in data practices, particularly as AI continues to evolve and integrate into everyday tools. 

The recent backlash against Slack's AI data policy highlights a crucial issue in the digital age: the need for clear, transparent data practices that respect user consent. As Slack works to update its policies, the company must prioritize user trust and regulatory compliance to maintain its position as a trusted communication platform. This episode serves as a reminder for all companies leveraging AI to ensure their data practices are transparent and user-centric.

What are the Privacy Measures Offered by Character AI?


Virtual communication now plays a tremendous part in people's lives, and it has raised corresponding concerns about privacy and data security. 

When it comes to AI-based platforms like Character AI and other generative AI services, privacy concerns are front and center. Users may well wonder whether anyone other than themselves can access their chats with Character AI. 

Here, we are exploring the privacy measures that Character AI provides.

Character AI Privacy: Can Other People See a User’s Chats?

The answer is no: other people cannot access the private conversations or chats a user has had with a character on Character AI. Strict privacy regulations and security precautions are generally in place to preserve the confidentiality of user communications. 

Nonetheless, certain data may be analyzed or employed in a combined, anonymous fashion to enhance the functionality and efficiency of the platform. Even with the most sophisticated privacy protections in place, it is always advisable to withhold sensitive or personal information.
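
As an illustration of what "combined, anonymous" analysis can mean in practice, here is a small hypothetical sketch (not Character AI's actual pipeline): only aggregate statistics leave the function, never message text or user identities.

```python
from statistics import mean

# Hypothetical per-chat records with user identifiers already stripped.
chat_events = [
    {"character": "tutor", "turns": 12},
    {"character": "tutor", "turns": 7},
    {"character": "storyteller", "turns": 30},
]

def aggregate(events):
    """Return counts and averages per character -- no message content,
    no usernames, nothing that identifies an individual."""
    turns_per_character = {}
    for event in events:
        turns_per_character.setdefault(event["character"], []).append(event["turns"])
    return {name: {"chats": len(turns), "avg_turns": mean(turns)}
            for name, turns in turns_per_character.items()}

print(aggregate(chat_events))
# {'tutor': {'chats': 2, 'avg_turns': 9.5}, 'storyteller': {'chats': 1, 'avg_turns': 30}}
```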

1. Privacy Settings on Characters

Character AI gives users the flexibility to alter the visibility of the characters they create. Characters are usually set to public by default, making them accessible to the larger community for discovery and enjoyment. Nonetheless, the platform acknowledges the significance of personal choice and privacy concerns, and lets creators restrict who can see their characters.

2. Privacy Options for Posts

Character AI also allows users to publish posts. When crafting a post, users have a range of visibility options to align with their content and sharing preferences.

Public posts are available to everyone in the platform's community and are intended to promote an environment of open sharing and creativity. 

Private posts, on the other hand, offer a more private and regulated sharing experience by restricting content viewing to a specific group of recipients. With this flexible approach to post visibility, users can customize their content-sharing experience to meet their own requirements.

3. Moderation of Community-Visible Content 

Character AI uses a vigilant content moderation mechanism to maintain a respectful and harmonious online community. Whenever content is shared or marked as public, this system proactively evaluates and handles it.

The aim is to detect and address any potentially harmful or unsuitable content, hence maintaining the platform's commitment to offering a secure and encouraging environment for users' creative expression. The moderation team puts a lot of effort into making sure that users can collaborate and engage with confidence, unaffected by worries about the suitability and calibre of the content in the community.

4. Consulting the Privacy Policy

Users seeking a detailed insight into Character AI's privacy framework can also consult its Privacy Policy document. It covers the different aspects of data management, user rights and responsibilities, and the intricacies of privacy settings.

To learn more about issues like default visibility settings, data handling procedures, and the scope of content moderation, users can browse the Privacy Policy. It is imperative that users remain knowledgeable about these rules in order to make well-informed decisions about their data and privacy preferences.

Character AI's community norms, privacy controls, and distinctive features all demonstrate the company's commitment to privacy. To safeguard its users' data, it is crucial that users interact with these privacy settings, stay updated on platform regulations, and make wise decisions. In the end, how users use these capabilities and Character AI's dedication to ethical data handling will determine how secure the platform is.  

Study Finds: Online Games are Collecting Gamers’ Data Using Dark Designs


A recent study by researchers at Aalto University's Department of Computer Science has revealed dark design patterns in online games' privacy policies and interfaces that could be used as dubious tactics to collect data on online gamers. To enhance privacy in online games, the study also provides design guidelines for game producers and risk-mitigation techniques for players.

There are about three billion gamers worldwide, and the gaming industry is worth $193 billion, almost twice as much as the combined value of the music and film industries.

Janne Lindqvist, associate professor of computer science at Aalto, noted, “We had two supporting lines of inquiry in this study: what players think about games, and what games are really up to with respect to privacy.”

The study's authors were astonished by how complex the concerns of gamers were. 

“For example, participants said that, to protect their privacy, they would avoid using voice chat in games unless it was absolutely necessary. Our game analysis revealed that some games try to nudge people to reveal their online identities by offering things like virtual rewards,” said Lindqvist in a report published in the journal Proceedings of the ACM on Human-Computer Interaction.

The authors found examples of games that used "dark design," or interface decisions that coerce users into taking actions they otherwise would not. These might make it easier to gather player data, motivate users to connect their social media profiles, or permit the exchange of player information with outside parties. 

“When social media accounts are linked to games, players generally can’t know what access the games have to these accounts or what information they receive,” said Amel Bourdoucen, doctoral researcher in usable security at Aalto.

For instance, in some of the prevalent games, gamers can log in with their social media accounts. However, these games may not disclose the information they have gathered in the interaction. “Data handling practices of games are often hidden behind legal jargon in privacy policies,” said Bourdoucen.

It has thus been suggested that game makers specify the data they collect from users and make sure that gamers acknowledge and consent to their data being collected.

“This can increase the player’s awareness and sense of control in games. Gaming companies should also protect players’ privacy and keep them safe while playing online,” the authors wrote.

The study reveals that gamers often had no idea that their chat-based conversations could be disclosed to outside parties. Additionally, players were not informed about data sharing during the course of a game.

The study further notes that players are aware of the risks and do, in fact, take certain mitigation measures.

Lindqvist says, “Games really should be fun and safe for everybody, and they should support the player’s autonomy. One way of supporting autonomy would be to let players opt out of invasive data collection.”  

New Privacy Policy: X Plans on Collecting Users’ Biometric Data


According to a new privacy policy introduced by X (formerly known as Twitter), it will soon be collecting its users’ biometric data. 

The policy says that the company intends to compile individuals' employment and educational histories. According to the policy page, the modification will take effect on September 29. 

The updated policy reads, “Based on your consent, we may collect and use your biometric information for safety, security, and identification purposes.” While biometric data usually involves an individual’s physical characteristics, such as their face or fingerprints, X has not yet specified which data it will collect or how it plans to collect it. 

In a conversation with Bloomberg, the company noted that biometric collection applies only to premium users, who will have the opportunity to submit their official ID and a photograph in order to add an additional layer of verification. According to Bloomberg, biometric information can then be extracted from both the ID and the image for matching purposes.

“This will additionally help us tie, for those that choose, an account to a real person by processing their government issued ID[…]This will also help X fight impersonation attempts and make the platform more secure,” X said in a statement to Bloomberg.
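
For readers curious how ID-to-selfie matching typically works, here is a heavily simplified sketch. It is not X's pipeline: the embeddings below are stand-in numbers, where a real system would produce them by running a face-recognition model over each image.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face embeddings; values near 1.0 suggest the same person."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; a real pipeline would compute these with a
# face-recognition model from the ID photo and the submitted selfie.
id_photo_embedding = [0.12, 0.80, 0.55, 0.10]
selfie_embedding = [0.15, 0.78, 0.58, 0.07]

MATCH_THRESHOLD = 0.9  # illustrative only; real systems tune this carefully
score = cosine_similarity(id_photo_embedding, selfie_embedding)
print(f"similarity={score:.3f}, match={score >= MATCH_THRESHOLD}")
```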

Last month, X was named in a proposed class-action suit accusing it of illicitly capturing, storing, and using the biometric data of Illinois residents, including facial scans. The lawsuit says X “has not adequately informed individuals” that it “collects and/or stores their biometric identifiers in every photograph containing a face.”

In addition to the modified details of the biometric collection, X’s updated policy reveals its intention of storing users’ employment and education history. 

“We may collect and use your personal information (such as your employment history, educational history, employment preferences, skills and abilities, job search activity and engagement, and so on) to recommend potential jobs for you, to share with potential employers when you apply for a job, to enable employers to find potential candidates, and to show you more relevant advertising,” the updated policy reads.

The move seems related to X's beta hiring functionality, which enables verified companies on the network to publish job postings on their accounts. The platform has also established an official @XHiring account. The hiring drive is a component of Musk's plans to make X an "everything app."  

Here's Why You Need to Read Privacy Policy Before Giving Consent

 

Over the last quarter-century, privacy policies—the lengthy, complex legal language you quickly scan through before mindlessly clicking "agree"—have grown both longer and denser. The average length of a privacy policy quadrupled between 1996 and 2021, according to a study published last year, and they also got much harder to understand. 

While examining the content of privacy policies, Isabel Wagner, an associate professor at De Montfort University, found several worrying trends, including the increasing usage of location data, the growing use of implicitly collected data, the absence of meaningful choice, the ineffective notification of privacy policy changes, an increase in data sharing with unidentified third parties, and the absence of specific information regarding security and privacy measures. 

While machine learning can be an effective tool for making sense of the world of privacy policies, a clause permitting it within a privacy policy has the potential to cause a ruckus. Zoom is a good example. 

In a recent article from the technology news site Stack Diary, a clause in Zoom's terms of service that stated the company could employ user data to train artificial intelligence drew harsh criticism from users and privacy advocates. Zoom is a well-known web conferencing service that became commonplace when pandemic lockdowns forced many in-person meetings to take place on laptop screens. 

According to the user agreement, Zoom users granted the firm "a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable licence" to use "customer content" for a variety of purposes such as "machine learning, artificial intelligence, training, [and] testing." This clause does not specify that users must first provide explicit permission for the company to do so.

Quick and ferocious internet criticism of Zoom resulted in some companies, including the news outlet Bellingcat, declaring their intention to stop using Zoom for video conferences. 

Unsurprisingly, Zoom was forced to reply. Within hours of the story going viral on Monday, Zoom Chief Product Officer Smita Hashim published a blog post to allay worries that, when people are virtually wishing their grandma a happy birthday from thousands of miles away, their likeness and mannerisms are being fed into artificial intelligence models.

“As part of our commitment to transparency and user control, we are providing clarity on our approach to two essential aspects of our services: Zoom’s AI features and customer content sharing for product improvement purposes,” Hashim explained. “Our goal is to enable Zoom account owners and administrators to have control over these features and decisions, and we’re here to shed light on how we do that and how that affects certain customer groups.” 

Hashim further stated that Zoom revised its terms of service to clarify the company's data usage guidelines. The clause stating that Zoom has "a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable licence" to use customer data for "machine learning, artificial intelligence, training, [and] testing" was left intact, but a new clause was added below it: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.” 

Data engineer Jesse Woo said that although he can understand why Zoom's terms of service struck a nerve, the sentiment expressed therein, that consumers permit the company to copy and use their content, is actually very common in these kinds of user agreements. The issue with Zoom's policy is that it was written so that each of the rights granted to the corporation is expressly listed, which can seem like a lot. But that is also roughly what you should expect from goods and services in 2023. Sorry, welcome to the future!

Discord Updated Its Privacy Policy

 

Discord has updated its privacy policy, effective March 27, 2023. The company has added back the previously deleted clauses, alongside built-in tools that make it easier for users to interact with voice and video content, such as the ability to record and send brief audio or video clips.

Additionally, Discord promoted the Midjourney AI art-generating server and noted that more than 3 million servers across the network already feature some sort of AI experience, positioning AI as something that is already popular on the platform.

Many critics have brought up the recent removal of two phrases from Discord's privacy policy: "We generally do not store the contents of video or voice calls or channels" and "We also don't store streaming content when you share your screen." Many responses express concern about AI tools being developed off of works of art and data that have been collected without people's permission.

Discord appears to be paying attention to customer concerns: it amended its post about the new AI tools to make clear that even though the tools are connected to OpenAI, OpenAI is not permitted to use Discord user data to train its general models.

The three tools Discord is releasing are an AI-powered AutoMod, AI-generated Conversation Summaries, and a machine-learning version of its mascot, Clyde.

Clyde has been rebuilt as an OpenAI-connected chatbot, and according to Discord, he can answer questions and hold lengthy conversations with you and your friends. Moreover, he can suggest playlists and start server threads. According to Discord, Clyde can access and use emoticons and GIFs like any Discord user while communicating with others.

To help human server moderators, Discord introduced the non-OpenAI version of AutoMod last year. Since its launch, Discord says, "AutoMod has automatically banned more than 45 million unwanted messages from servers before they even had a chance to be posted," in accordance with server policies.

The OpenAI version of AutoMod will similarly search for messages that break the rules, but it will do so while bearing in mind the context of a conversation. The server's moderator will receive a message from AutoMod if it believes a user has submitted something that violates the rules.
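
To illustrate the difference context makes, here is a hypothetical sketch of a context-aware check (a stand-in rule, not Discord's or OpenAI's actual moderation logic): the same message can pass on its own yet be flagged when the recent conversation shows it being spammed.

```python
BANNED_TERMS = {"buy now", "free crypto"}  # hypothetical server rules

def violates_rules(message: str, recent_context: list) -> bool:
    """Judge a message against the surrounding conversation, not in isolation."""
    text = message.lower()
    # Context-free check: outright banned phrases are always flagged.
    if any(term in text for term in BANNED_TERMS):
        return True
    # Context-aware check: a harmless message repeated over and over
    # in the recent history starts to look like spam.
    repeats = sum(1 for prior in recent_context if prior.lower() == text)
    return repeats >= 2

history = ["hello", "hello", "check this out"]
print(violates_rules("hello", []))       # False: fine in isolation
print(violates_rules("hello", history))  # True: third repetition in context
```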

Anjney asserted that the company respects the intellectual property of others and requires everyone using Discord to do the same. The company takes these concerns seriously and has a strict copyright and intellectual property policy.

Digital Resignation is the Initial Stage of Safeguarding Privacy Online

 

Several internet businesses gather and use our personal information in exchange for access to their digital goods and services. With that data, they can predict and influence our future behavior. Recommendation algorithms, targeted marketing, and individualized experiences are all examples of this surveillance capitalism.

Many customers are unhappy with these methods, especially once they learn how their data is obtained, despite tech companies' claims that such personalization improves the user experience.

Digital resignation refers to the circumstance in which users of digital services continue to do so while being aware that the businesses providing those services are violating their privacy by conducting extensive monitoring, manipulating them, or otherwise negatively affecting their well-being.

The Cambridge Analytica scandal and Edward Snowden's disclosures about widespread government spying shed light on data-collection techniques, but they also left individuals feeling helpless and resigned to the idea that their data will be taken and exploited without their express agreement. This is digital resignation.

Acknowledging and reforming these tactics is the responsibility of both policymakers and businesses. Dealing with data gathering and use alone will not produce corporate accountability for privacy harms.

Our daily lives are completely surrounded by technology. But informed consent is impossible to obtain when the average person lacks the motivation or expertise needed to decipher confusing terms-and-conditions documents.

However, the European Union has passed regulations that acknowledge these destructive market dynamics and has begun to hold platforms and internet giants accountable. 

With the passage of Law 25, Québec has updated its privacy rules. The purpose of the law is to give people more protection and control over their personal information. It grants individuals the right to seek the transfer of their personal data to another system, its correction or deletion (the right to be forgotten), as well as the right to notice before an automated decision is made.

Additionally, it mandates that businesses designate a privacy officer and committee and carry out privacy impact assessments for any project involving personal data. Businesses must also obtain explicit consent and communicate their terms and rules clearly and transparently. 
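
As a rough sketch of what servicing these rights looks like in software (hypothetical storage and names, not any particular company's system), a rights-request handler might route portability, correction, deletion, and automated-decision notices like this:

```python
# In-memory stand-in for a user-data store.
records = {"user-42": {"email": "old@example.com"}}

def handle_request(user_id, kind, payload=None):
    """Route a Law 25-style data-subject request."""
    if kind == "export":  # right to data portability
        return records.get(user_id, {}).copy()
    if kind == "correct":  # right to correction
        records.setdefault(user_id, {}).update(payload or {})
        return records[user_id]
    if kind == "delete":  # right to be forgotten
        return records.pop(user_id, None)
    if kind == "notice":  # notice before an automated decision
        return f"{user_id} notified: an automated decision is about to be made."
    raise ValueError(f"unknown request type: {kind}")

print(handle_request("user-42", "correct", {"email": "new@example.com"}))
print(handle_request("user-42", "export"))
print(handle_request("user-42", "delete"))
```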


Rethinking Customer Engagement After Apple's Data Privacy Rules

The changes to Apple's privacy policy last year were one of those events where the worried predictions turned out to be precisely what happened: marketers saw a significant reduction in their ability to target and personalize ads based on users' online behavior, with a downstream impact on the social media giants' ad revenues. As a result, the money that Chief Marketing Officers (CMOs) continue to spend on marketing is becoming less and less effective. 

By some measures, ROI has plunged by nearly 40% based on the available data, and marketing professionals are scrambling to keep up with the new environment. So far, though, the scramble has not produced a notable change in how they actually behave. 

The marketing community still acts as though it lives in an advertising world where a vast amount of data is freely available, and the majority have yet to adopt the approach that would serve them best. In a post-privacy era, where marketers receive less and less information about individuals or their digital consumption across a broad range of devices and platforms, they must engage customers as soon as those customers show an interest in their products. 

Value exchange

Merely reaching a person does not make them an ideal demographic candidate for your product, especially if the product requires a great deal of consideration. 

It is still imperative to have some exchange of value, where marketers give customers something they need, often simply more information, as a way to gain their attention and, with luck, their future loyalty. 

Mattress stores, and physical retail in general, could not exist without this kind of exchange. Consumers undoubtedly tend to stick with what they know and love, even in their transactions, which is why it now falls to digital marketers to re-create the three-dimensional relationships that still exist in real life rather than settling for purely online transactions. 

Apple's reformed privacy policy makes it apparent that marketers had, in many ways, become far too lazy. They had grown accustomed to an environment where they could observe signals that let them predict the future shopping behavior of every customer they encountered. 

It is crucial to understand that the absence of this world does not mean brands are doomed to fail. To put it simply, it means that they need to come up with original and creative ways of accomplishing their goals, which may even require them to re-learn some old lessons they may have forgotten over the years.   

Microsoft Reveals Data Breach Affecting 65,000 Companies

 

Microsoft this week acknowledged that it unintentionally exposed customer data after a misconfiguration left an endpoint freely accessible over the internet without any authentication.

The IT giant was contacted on September 24, 2022, when the cybersecurity intelligence company SOCRadar identified the data leak.

According to SOCRadar, which says it informed Microsoft of its findings, a misconfigured Azure Blob Storage instance may have exposed 2.4 TB of privileged data: names, phone numbers, email addresses, company names, and attached files containing information such as proof-of-concept documents, sales data, and product orders.

Microsoft highlighted that there was no security flaw to blame for the B2B leak, which was "generated by an unintended misconfiguration on an endpoint that is not used across the Microsoft ecosystem." Microsoft contested the scope of the problem, however, while confirming that the information in question included names, email addresses, email content, company names, contact numbers, and attached files pertaining to business between a customer and Microsoft or an authorized Microsoft partner.

Organizations can find out whether their data was exposed thanks to a website called BlueBleed that SOCRadar set up. "According to our study, the leak, known as BlueBleed Part I, contains crucial data that belongs to more than 65,000 companies from 111 countries. So far, the leaks have exposed 548,000 individuals, 133,000 projects, and more than 335,000 emails," the SOCRadar researchers said. 

Additionally, Redmond voiced its dissatisfaction with SOCRadar's decision to make a public search function available, claiming that doing so exposes customers to unnecessary security risks.

In a follow-up post published on Thursday, SOCRadar compared the BlueBleed search engine to the 'Have I Been Pwned' data breach notification tool, presenting it as a way for businesses to determine whether their data had been compromised in a cloud data leak.

The research company maintains that it did not violate any privacy policies during its investigation and that none of the data it found was saved on its end. According to SOCRadar's VP of Research and CISO Ensar Şeker, "No data was downloaded. Some of the data were crawled by our engine, but as we promised to Microsoft, no data has been given so far. All this crawled data was erased from our servers."

Microsoft has not yet made any specific figures concerning the data breach available to the public.


WhatsApp's New Privacy Policy: A Quick Look

With the advent of its latest privacy policy, the Facebook-owned messaging app is set to block certain features for users who do not agree to the new terms.

The update, initially set to roll out by February 8 and make the new privacy terms applicable to all users, was delayed until May 15 after WhatsApp faced a strong public backlash, which allowed competitors, namely Telegram and Signal, to strengthen their reputations with the public.

WhatsApp's earlier ultimatum held that users who did not accept the updated privacy policy by May 15 would no longer be able to use the app. Later, however, the company said that no accounts would be deleted if that happened. 

Addressing broader privacy concerns, a WhatsApp spokesperson said, “Requiring messaging apps to “trace” chats is the equivalent of asking us to keep a fingerprint of every single message sent on WhatsApp, which would break end-to-end encryption and fundamentally undermines people’s right to privacy.”

“We have consistently joined civil society and experts around the world in opposing requirements that would violate the privacy of our users. In the meantime, we will also continue to engage with the Government of India on practical solutions aimed at keeping people safe, including responding to valid legal requests for the information available to us,” the Spokesperson added.

WhatsApp says it is not imposing the new policy on users, who are free to decline it. In practice, though, the only alternative to accepting the 2021 update may be for users to delete their WhatsApp accounts on their own, since those who decline will eventually be unable to access their chat lists or call their contacts via WhatsApp. 

From WhatsApp's statements, we can deduce that users who hold out will be reminded to accept the updated privacy policy every time they open the app, with the platform eventually becoming more or less unserviceable to them. 

Users who do accept the updated privacy policy won't see any key changes in their experience; those who keep the app installed without accepting the new policy, however, may eventually end up saying goodbye to it because of its limited serviceability or “inactivity”. 

Signal Taunts WhatsApp as Confusion Looms Large Over its New Privacy Policy

 

According to the announcement, WhatsApp will take action against users who have not approved the privacy policy: it will not delete their accounts, but it will disable certain essential features. Users remain skeptical about accepting the policy because there still isn't enough clarity about what it really means. Meanwhile, Signal, a secure messaging app, has taken full advantage of the moment to draw users to its own service. 

A few days before the much-dreaded May 15 deadline, WhatsApp announced that it would not remove the accounts of users who had not approved the privacy policy by that date. With a cheeky update on Twitter today, WhatsApp reminded users that their accounts will not be deleted.

“*checks calendar. pours coffee*. OK. Let’s do this. No, we can’t see your personal messages. No, we won’t delete your account. Yes, you can accept at any time,” WhatsApp wrote on Twitter. 

Signal, an arch competitor of WhatsApp, retweeted the post and wrote, “*checks calendar. pours coffee.* Today’s a great day to switch to privacy.” 

Since announcing its revised privacy policy, WhatsApp has been bombarded with complaints from users. Users were first notified about the policy in January through an in-app update, with a deadline of February 8 to approve it. 

However, users were outraged by the lack of clarity, and many moved to other messaging apps such as Signal and Telegram. Users feared that WhatsApp would share their private conversations with Facebook, forcing the company to push the deadline back to May 15. 

The terms have since been modified. WhatsApp had previously issued an ultimatum that users must accept the privacy policy in order to continue using the app, but it has now confirmed that accounts will not be deleted. Though WhatsApp may not delete accounts, it will deactivate certain features, effectively transforming the app into a dummy app. 

WhatsApp told The Guardian in a statement, “After a few weeks of limited functionality, you won’t be able to receive incoming calls or notifications and WhatsApp will stop sending messages and calls to your phone. At that point, users will have to choose: either they accept the new terms, or they are in effect prevented from using WhatsApp at all.”