
Microsoft Employee Raises Alarms Over Copilot Designer and Urges Government Intervention


Shane Jones, a principal software engineering manager at Microsoft, has sounded the alarm about the safety of Copilot Designer, a generative AI tool introduced by the company in March 2023. 

His concerns prompted him to send a letter to both the US Federal Trade Commission (FTC) and Microsoft's board of directors, calling for an investigation into the text-to-image generator. Jones's concerns center on Copilot Designer's capacity to generate inappropriate images, spanning explicit content, violence, underage drinking, and drug use, as well as political bias and conspiracy theories. 

Beyond raising these concerns, he has stressed the need to educate the public, especially parents and educators, about the associated risks, particularly in schools where the tool may be used. Despite Jones's persistent efforts over the past three months to address the issue internally, Microsoft has neither removed Copilot Designer from public use nor implemented adequate safeguards. His recommendations, including adding disclosures and adjusting the product's rating on the Android app store, were not adopted by the tech giant. 

Microsoft responded to Jones's concerns by affirming its commitment to addressing employee concerns within the framework of company policies, and it expressed appreciation for efforts aimed at enhancing the safety of its technology. Still, the situation underscores the internal tension companies face in balancing innovation with the responsibility of ensuring their technologies are safe and ethical. 

This incident isn't the first time Jones has spoken out about AI safety concerns. Despite facing pressure from Microsoft's legal team, Jones persisted in voicing his concerns, even extending his efforts to communicate with US senators about the broader risks associated with AI safety. The case of Copilot Designer adds to the ongoing scrutiny of AI technologies in the tech industry. Google recently paused access to its image generation feature on Gemini, its competitor to OpenAI's ChatGPT, after facing complaints about historically inaccurate images involving race. 

Google DeepMind, the company's AI division, said the feature would be reinstated once the concerns were addressed and responsible use of the technology could be ensured. As AI systems become woven into more aspects of daily life, incidents like the one involving Copilot Designer highlight the need for vigilant oversight and ethical consideration in AI development and deployment, and for collaboration between tech companies, regulators, and other stakeholders to steer the technology's evolution safely.

AI Image Generation Breakthrough Predicted to Trigger Surge in Deepfakes


A recent publication by the InstantX team in Beijing introduces a novel AI image generation method named InstantID. The technique can rapidly generate new, identity-preserving images of a person from just a single reference photo. 

Although Reuven Cohen, an enterprise AI consultant, has hailed it as a "new state-of-the-art," he also warns of its potential misuse for creating deepfake audio, images, and videos, especially as the 2024 election approaches.

Cohen highlights the downside of InstantID, emphasizing its ease of use and its ability to produce convincing deepfakes without extensive training or fine-tuning. The tool's efficiency at generating identity-preserving content, he argues, could fuel a surge in highly realistic deepfakes while demanding minimal GPU and CPU resources.

InstantID also surpasses the widely used LoRA models at identity-preserving image generation. In a LinkedIn post, Cohen bid farewell to LoRA, dubbing InstantID "deep fakes on steroids." 

The team's paper, titled "InstantID: Zero-shot Identity-Preserving Generation in Seconds," asserts that InstantID outperforms techniques such as LoRA by offering a "plug-and-play module" that handles image personalization from a single facial reference image, achieving high fidelity without LoRA's storage demands and lengthy fine-tuning.

Cohen explains that InstantID specializes in zero-shot identity-preserving generation, which sets it apart from LoRA and its extension QLoRA: whereas LoRA and QLoRA fine-tune models on new data, InstantID generates outputs that preserve the identity characteristics of the input image efficiently and rapidly, with no additional training.

InstantID's core function, preserving identity in generated content, underscores how simple creating AI deepfakes has become. Cohen warns that the tool makes it exceedingly easy to engineer deepfakes, requiring only a single click to deploy on platforms like Hugging Face or Replicate.