
WordPress and Tumblr Owner Intends to Sell User Content to AI Firms

 

Automattic, the parent company of platforms like WordPress and Tumblr, is in negotiations to sell content from its platforms to AI firms such as Midjourney and OpenAI for training purposes. Automattic is also trying to reassure users that they can opt out at any time, even though the specifics of any agreement are still unknown, according to a new report from 404 Media.

404 Media reports that Automattic is experiencing internal disputes because private content the firm was never meant to retain was among the material scraped for AI companies. Further complicating matters, it was discovered that adverts from an earlier Apple Music campaign, as well as other non-Automattic commercial items, had made their way into the training data set.

Generative AI has grown in popularity since OpenAI introduced ChatGPT in late 2022, with a number of companies quickly following suit. The system works by being "trained" on massive volumes of data, allowing it to generate videos, images, and text that appear to be original. However, big publishers have protested, and some have even filed lawsuits, claiming that most of the data used to train these systems was either pirated or does not constitute "fair use" under existing copyright regimes. 

Automattic intends to offer a new setting that would allow users to opt out of having their content used to train AI systems, though it is unclear whether the setting will be enabled or disabled by default for most users. Last year, WordPress competitor Squarespace launched a similar option that lets users opt out of having their data used to train AI.

In response to emailed questions, Automattic directed local media to a new post that essentially confirmed 404 Media's story, while also attempting to pitch the move to users as a chance to "give you more control over the content you've created."

“AI is rapidly transforming nearly every aspect of our world, including the way we create and consume content. At Automattic, we’ve always believed in a free and open web and individual choice. Like other tech companies, we’re closely following these advancements, including how to work with AI companies in a way that respects our users’ preferences,” the blog post reads.

However, the lengthy statement comes across as incredibly defensive, noting that "no law exists that requires crawlers to follow these preferences," and implying that the company is simply following industry best practices by giving users the option of whether or not they want their content employed for AI training.
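In practice, the crawler preferences referred to here are usually expressed through a site's robots.txt file, which crawlers honour only voluntarily. As a minimal illustration, OpenAI documents that its GPTBot crawler respects directives of the following form (blocking the entire site here is just an example; a publisher could instead disallow specific paths):

```
User-agent: GPTBot
Disallow: /
```

Because compliance is voluntary, as the post itself notes, such directives are a preference signal rather than an enforceable control.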

ChatGPT Faces Data Protection Questions in Italy

 


OpenAI's ChatGPT is facing renewed scrutiny in Italy as the country's data protection authority, Garante, asserts that the AI chatbot may be in violation of data protection rules. This follows a previous ban imposed by Garante due to alleged breaches of European Union (EU) privacy regulations. Although the ban was lifted after OpenAI addressed concerns, Garante has persisted in its investigations and now claims to have identified elements suggesting potential data privacy violations.

Garante, known for its proactive stance on AI platform compliance with EU data privacy regulations, had initially banned ChatGPT over alleged breaches of EU privacy rules. Despite the reinstatement after OpenAI's efforts to address user consent issues, fresh concerns have prompted Garante to escalate its scrutiny. OpenAI, however, maintains that its practices are aligned with EU privacy laws, emphasising its active efforts to minimise the use of personal data in training its systems.

"We assure that our practices align with GDPR and privacy laws, emphasising our commitment to safeguarding people's data and privacy," stated the company. "Our focus is on enabling our AI to understand the world without delving into private individuals' lives. Actively minimising personal data in training systems like ChatGPT, we also decline requests for private or sensitive information about individuals."

In the past, OpenAI confirmed fulfilling numerous conditions demanded by Garante to lift the ChatGPT ban. The watchdog had imposed the ban due to exposed user messages and payment information, along with ChatGPT lacking a system to verify users' ages, potentially leading to inappropriate responses for children. Additionally, questions were raised about the legal basis for OpenAI collecting extensive data to train ChatGPT's algorithms. Concerns were voiced regarding the system potentially generating false information about individuals.

OpenAI's assertion of compliance with GDPR and privacy laws, coupled with its active steps to minimise personal data, appears to be a key element in addressing the issues that led to the initial ban. The company's efforts to meet Garante's conditions signal a commitment to resolving concerns related to user data protection and the responsible use of AI technologies. As the investigation proceeds, these assurances may play a crucial role in determining how OpenAI navigates the challenges posed by Garante's scrutiny of ChatGPT's data privacy practices.

In response to Garante's claims, OpenAI is gearing up to present its defence within a 30-day window provided by Garante. This period is crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. The backdrop to this investigation is the EU's General Data Protection Regulation (GDPR), introduced in 2018. Companies found in violation of data protection rules under the GDPR can face fines of up to 4% of their global turnover.

Garante's actions underscore the seriousness with which EU data protection authorities approach violations and their willingness to enforce penalties. This case involving ChatGPT reflects broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and governments reached provisional terms for regulating AI systems like ChatGPT, emphasising comprehensive rules to govern AI technology with a focus on safeguarding data privacy and ensuring ethical practices.

OpenAI's cooperation and its ability to address concerns regarding personal data usage will play a pivotal role. The broader regulatory trends in the EU indicate a growing emphasis on establishing comprehensive guidelines for AI systems, addressing data protection and ethical considerations. For readers, these developments underscore the importance of compliance with data protection regulations and the ongoing efforts to establish clear guidelines for AI technologies in the EU.



Google DeepMind Cofounder Claims AI Could Run Its Own Company Within Five Years

 

Mustafa Suleyman, cofounder of DeepMind, Google's AI group, believes that AI will be able to start and run its own firm within the next five years.

During a discussion on AI at the 2024 World Economic Forum, the now-CEO of Inflection AI was asked how long it will take AI to pass a Turing test-style exam. Passing would suggest that the technology has advanced to human-like capabilities known as AGI, or artificial general intelligence. 

In response, Suleyman stated that the modern version of the Turing test would be to determine whether an AI could operate as an entrepreneur, mini-project manager, and creator capable of marketing, manufacturing, and selling a product for profit. 

He seems to expect that AI will be able to demonstrate those business-savvy qualities before 2030—and inexpensively.

"I'm pretty sure that within the next five years, certainly before the end of the decade, we are going to have not just those capabilities, but those capabilities widely available for very cheap, potentially even in open source," Suleyman stated in Davos, Switzerland. "I think that completely changes the economy.”

The AI leader's views are just one of several forecasts Suleyman has made concerning AI's societal influence as technologies like OpenAI's ChatGPT gain popularity. Suleyman told CNBC at Davos last week that AI will eventually be a "fundamentally labor-replacing" instrument.

In a separate interview with CNBC in September, he projected that within the next five years, everyone will have AI assistants that will enhance productivity and "intimately know your personal information.” "It will be able to reason over your day, help you prioritise your time, help you invent, be much more creative," Suleyman stated. 

Still, he stated on the 2024 Davos panel that the term "intelligence" in reference to AI remains a "pretty unclear, hazy concept." He calls the term a "distraction.” 

Instead, he argues that researchers should concentrate on AI's real-world capabilities, such as whether an AI agent can communicate with humans, plan, schedule, and organise.

People should move away from the "engineering research-led exciting definition that we've used for 20 years to excite the field" and "actually now focus on what these things can do," Suleyman advised.

OpenAI Moves to Minimize Regulatory Risk on Data Privacy in EU

 

While the majority of the world was celebrating the arrival of 2024, it was back to work for ChatGPT's parent company, OpenAI. 

After being investigated for violating people's privacy, the firm is believed to be racing against the clock to do everything in its power to limit regulatory risk in the EU. That is the primary reason the company has returned to amending its terms and conditions.

With a string of investigations underway into how chatbots process user data and generate outputs, including probes from top watchdogs in the region, ChatGPT has been accused of negatively impacting users' privacy.

Matters went far enough that Italy temporarily banned the AI tool after determining that the company needed to change how it handled certain data and the degree of control granted to users.

Now, OpenAI is sending out emails detailing how it has modified its ChatGPT service in the regions where the most concerns have arisen. The emails make clear which entity, as stated in its privacy policy, is responsible for processing personal data.

The latest terms establish the firm's Dublin subsidiary as the data controller for user data across the EEA and Switzerland.

The company said the change takes effect as early as next month; users who disagree are advised that they can delete their OpenAI accounts. The move ties into the GDPR's one-stop-shop (OSS) mechanism, under which firms processing EU data can have privacy oversight coordinated through a single lead supervisory authority in the EU.

Such a status makes it less likely that privacy watchdogs elsewhere in the EU will act on these issues unilaterally, as they previously could. Instead, they forward complaints to the supervisor of the firm's main establishment, which then addresses any issues.

If an immediate risk arises, GDPR regulators retain the authority to intervene through local means. This year the company established an office in Ireland's capital and hired numerous professionals for senior legal and privacy positions, although the majority of its open roles remain in the United States.

Due to Brexit, however, the company's users in the United Kingdom fall outside the legal basis on which OpenAI's move to Ireland rests: the EU's GDPR no longer applies in the United Kingdom, which now operates its own version of the regulation.

A lot is going on here, and it will be interesting to see how the change in OpenAI's terms affects its regulatory risk in the EU.

OpenAI Employee Claims Prompt Engineering is Not the Skill of the Future

 

If you're a prompt engineer, a master at coaxing the AI models behind products like ChatGPT to produce the best results, you could earn well over six figures. However, an OpenAI employee claims the skill is not as groundbreaking as it is made out to be.

"Hot take: Many believe prompt engineering is a skill one must learn to be competitive in the future," Logan Kilpatrick, a developer advocate at OpenAI, wrote on X, formerly known as Twitter, earlier this week. "The reality is that prompting AI systems is no different than being an effective communicator with other humans.” 

While prompt engineering is becoming increasingly popular, the three underlying skills that will genuinely matter in 2024, according to the OpenAI employee, are reading, writing, and speaking. Honing these skills will give humans a competitive advantage as increasingly capable AI systems advance.

"Focusing on the skills necessary to effectively communicate with humans will future proof you for a world with AGI," he stated. Artificial general intelligence, or AGI, is the capacity of AI to carry out difficult cognitive tasks like making independent decisions on par with human performance. 

Some X users responded to Kilpatrick's post by stating that conversing with AI could actually improve human communication skills.

"Lots of people could learn a great deal about interpersonal communication simply by spending time with these AI systems and learning to work well with them," one X user noted. Another said that gaining prompt engineering skills had made them a "better communicator and manager".

Additionally, some believe that improving interaction between humans and machines is essential to improving AI's responses.

"Seems quite obvious that talking to/persuading/eliciting appropriate knowledge out of AI's will be as nuanced, important, and as much of an acquired skill as doing the same with humans," Neal Khosla, whose X bio says he's the CEO of an AI startup, commented in response to Kilpatrick. 

The OpenAI employee's views on prompt engineering come as researchers and AI experts alike seek new ways for users to communicate with ChatGPT to achieve the best results, and as ChatGPT users begin to incorporate the chatbot into their personal and professional lives.

A study published in November discovered that using emotional language like "This is very important to my career" when talking to ChatGPT leads to enhanced responses. According to AI experts, assigning ChatGPT a specific job and conversing with the chatbot in courteous, direct language can produce the best outcomes.
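Those tips can be sketched in code. The following is a minimal, hypothetical illustration of assembling a chat-style prompt that assigns the model a specific role and optionally appends an emotional cue of the kind the study examined; the helper function and wording are illustrative examples, not an official OpenAI recipe:

```python
from typing import Optional

def build_messages(role: str, task: str, stakes: Optional[str] = None) -> list:
    """Assemble a chat-style message list: give the model a specific role,
    state the task in direct, courteous language, and optionally append
    an emotional cue (e.g. "This is very important to my career.")."""
    system = f"You are {role}. Answer clearly and concisely."
    user = task if stakes is None else f"{task} {stakes}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages(
    role="an experienced careers adviser",
    task="Please review my cover letter for tone and clarity.",
    stakes="This is very important to my career.",
)
```

A list shaped like this can be passed to any chat-completion-style API; the point is simply that role assignment and framing live in ordinary prose, which is Kilpatrick's argument.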

Google Eases Restrictions: Teens Navigate Bard with Guardrails

 


Google has announced that it will allow teens in most countries to use Bard, its AI-based chatbot, with certain guardrails in place. Starting Thursday, Google will begin opening up access to Bard to teenagers in most countries around the world, according to Tulsee Doshi, Head of Product, Responsible AI at Google.

Teens who meet the minimum age requirement to manage their own Google Account will be able to access the chatbot, with more languages to be supported in the future. The expanded launch comes with a number of safety features and guardrails designed to prevent teens from accessing harmful content.

According to a blog post by Google, teens can use the search giant's new tool to find inspiration, discover new hobbies, and solve everyday problems. They can text Bard with questions about anything from important topics, such as where to apply to college, to more fun matters, such as learning a new sport. Google also pitches Bard as a valuable learning tool that lets teens dig deeper into topics, build their understanding of complex concepts, and practice new skills in ways that work for them.

Bard has safety features in place to keep unsafe content, such as illegal substances or age-gated material, out of its responses to teens, and it has been trained to recognise topics that are inappropriate for them.

The first time a teenager asks a fact-based question, Google will automatically run its double-check feature, which looks for substantiation of Bard's answer across the web.

To help students develop information literacy and critical thinking skills, Bard will actively encourage teens to use double-check. Google also plans to make users aware that large language models (LLMs) can hallucinate, and to make the double-check feature available to everyone who uses Bard.

A "tailored onboarding experience" will also be offered to teens, providing a link to Google's AI Literacy Guide, a video explaining how to use generative AI responsibly, and a description of how Bard Activity is used, with the option to turn it on or off. In addition, Google announced that it is bringing a math learning experience to Bard, which will allow anyone, including teens, to type in or upload a picture of a math equation.

Instead of just providing the answer, Bard will explain how to solve the equation step by step.

In recent years, Google has been improving the quality of Search for homework help. Bard will also be able to create charts based on information provided in a prompt. It is not surprising that Google is releasing Bard for teens at a time when social platforms have launched AI chatbots for young users, to mixed reviews.

Snapchat's "My AI" chatbot, launched in February without appropriate age-gating features, faced controversy after it was discovered chatting with minors about issues such as covering up the smell of weed and setting the mood for sexual activity.

Bard is now available in over 230 countries and territories, including the United States, the United Kingdom, and Australia. Google announced Bard in February, a debut marred by an embarrassing factual error caused by hallucination, and followed with a limited early-access launch in the US and the UK. As Google tries to compete with chatbots like Microsoft's recently rebranded Bing Chat, now titled Copilot, and OpenAI's ChatGPT, it has also added a raft of new features to Bard.

Gen Z's Take on AI: Ethics, Security, and Career

Generation Z is leading innovation and transformation in a fast-changing technological landscape. With distinct viewpoints on issues like artificial intelligence (AI), data security, and career disruption, Gen Z is positioned to have an unparalleled impact on how work will be done in the future.

Gen Z is acutely aware of the ethical implications of AI. According to a recent survey, a significant majority expressed concerns about the ethical use of AI in the workplace. They believe that transparency and accountability are paramount in ensuring that AI systems are used responsibly. This generation calls for a balance between innovation and safeguarding individual rights.

AI in Career Disruption: Navigating Change

For Gen Z, the rapid integration of AI in various industries raises questions about job stability and long-term career prospects. While some view AI as a threat to job security, others see it as an opportunity for upskilling and specialization. Many are embracing a growth mindset, recognizing that adaptability and continuous learning are key to thriving in the age of AI.

Gen Z and the AI Startup Ecosystem

A noteworthy trend is the surge of Gen Z entrepreneurs venturing into the AI startup space. Their fresh perspectives and digital-native upbringing give them a unique edge in understanding the needs of the tech-savvy consumer. These startups drive innovation, push boundaries, and redefine industries, from healthcare to e-commerce.

Economic Environment and Gen Z's Resilience

Amidst economic challenges, Gen Z has demonstrated remarkable resilience. A recent study by Bank of America highlights that 73% of Gen Z individuals feel that the current economic climate has made it more challenging for them. However, this generation is not deterred; they are leveraging technology and entrepreneurial spirit to forge their own paths.

A McKinsey report underscores that Gen Z's relationship with technology is utilitarian and deeply integrated into their daily lives. They are accustomed to personalized experiences and expect the same from their work environments. This necessitates a shift in how companies approach talent acquisition, development, and retention.

Gen Z is a generation ready for transformation, as seen in its engagement with AI, data security, and job disruption. Its viewpoints offer insight into how businesses and industries might adapt to the changing needs of the digital age. Gen Z will likely have a lasting impact on technology and AI as it continues to carve its path in the workplace.


Guidelines on What Not to Share with ChatGPT: A Formal Overview

 


A tool like ChatGPT has remarkable power and has profoundly changed how we interact with computers. There are, however, limitations that are important to understand and bear in mind when using it.

OpenAI has seen a massive increase in revenue on the back of ChatGPT's growth: the company was reportedly on track for around 200 million dollars in revenue in 2023, and revenue is expected to exceed one billion dollars by the end of 2024.

ChatGPT's underlying models are powerful enough to generate almost any text users ask for, from a simple math sum to a complex question about rocket science. As AI-powered chatbots become more prevalent, it is crucial to recognise both their advantages and their shortcomings.

Using AI chatbots safely means understanding the inherent risks, such as the potential for cyber attacks and privacy violations. A recent change to Google's privacy policy made clear that the company is considering using data collected from public web posts to train its AI tools and models.

It is equally troubling that ChatGPT retains chat logs to improve the model and the service. There is, however, a way to address this concern: refrain from sharing certain kinds of information with AI-based chatbots. Jeffrey Chester, executive director of the Center for Digital Democracy, a digital rights advocacy organisation, said consumers should view these tools with suspicion at the very least, since, like so many other popular technologies, they are heavily influenced by the marketing and advertising industries.

The Limits of ChatGPT


Unless browsing is enabled (a ChatGPT Plus feature), ChatGPT generates responses based on the patterns and information it learned during training, which drew on a wide range of internet text up to its training cut-off of September 2021.

Even so, it is incapable of understanding context the way people do and does not "know" anything in the human sense. ChatGPT is famous for producing impressive and relevant responses a great deal of the time, but it is not infallible: the answers it produces can be incorrect or unintelligible for several reasons.

Its proficiency largely depends on the quality and clarity of the prompt given. 

1. Banking Credentials 


The Consumer Financial Protection Bureau (CFPB) published a report on June 6 about the limitations of chatbot technology as questions become more complex. According to the report, poorly implemented chatbot technology could lead financial institutions to violate federal consumer protection laws.

The CFPB says consumer complaints have risen over a variety of issues, including resolving disputes, obtaining accurate information, receiving good customer service, reaching human representatives, and keeping personal information secure. In light of this, the CFPB advises financial institutions not to rely solely on chatbots in their business model.

2. Personally Identifiable Information (PII)


Users should avoid sharing sensitive personal information that can identify them: full name, home address, social security number, credit card numbers, and anything similar. Protecting these details is paramount to preserving privacy and preventing harm from unauthorised use.
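One practical mitigation is to scrub obvious PII patterns from a prompt before it ever reaches a chatbot. The sketch below uses deliberately simple regular expressions as hypothetical placeholders; a real scrubber would need far more robust detection (names, addresses, and so on):

```python
import re

# Hypothetical, deliberately simple patterns for illustration only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # e.g. 123-45-6789
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matches of each PII pattern with a [REDACTED-<kind>] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{kind.upper()}]", text)
    return text

prompt = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."
print(redact(prompt))
```

Running the redaction locally, before the text leaves your machine, means the chatbot provider never sees the original values at all.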

3. Confidential information about the user's workplace


Users should exercise caution and refrain from sharing private company information when interacting with AI chatbots. It is crucial to understand the potential risks associated with divulging sensitive data to these virtual assistants. 

Major tech companies like Apple, Samsung, JPMorgan, and Google have even implemented stringent policies to prohibit the use of AI chatbots by their employees, recognizing the importance of protecting confidential information. 

A recent Bloomberg article shed light on an unfortunate incident involving a Samsung employee who inadvertently uploaded confidential code to a generative AI platform while utilizing ChatGPT for coding tasks. This breach resulted in the unauthorized disclosure of private information about Samsung, which subsequently led to the company imposing a complete ban on the use of AI chatbots. 

Such incidents highlight the need for heightened vigilance and adherence to security measures when leveraging AI chatbots. 

4. Passwords and security codes 


If a chatbot asks for passwords, PINs, security codes, or any other confidential access credentials, do not provide them. Even though these chatbots are designed with privacy in mind, it is prudent to prioritise your safety and keep sensitive credentials to yourself.

Securing your passwords and access credentials is paramount to keeping your accounts safe and protecting your personal information from unauthorised access or misuse.

In an age marked by rapid progress in AI chatbot technology, carefully protecting personal and sensitive information is of the utmost importance. The guidance above underscores the need to engage with AI-driven virtual assistants responsibly and cautiously, with privacy and data integrity as the primary objectives. Stay well informed and exercise prudence when interacting with these powerful tools.