Hackers broke into Azure OpenAI, generated tonnes of ‘harmful’ content, claims Microsoft
While Microsoft has not disclosed the exact nature of the content created by the cybercriminals, it confirmed that it violated the company's policies and terms of service.

Image Credit: Reuters

Microsoft has filed a lawsuit after a group of cybercriminals allegedly bypassed security guardrails on its Azure OpenAI platform, using the service to generate harmful and offensive content. The cybercriminals, described as a foreign-based threat group, are accused of stealing customer credentials and using custom-designed software to gain unauthorised access to Microsoft's generative AI services, including ChatGPT and DALL-E.

How the hackers gained access

Azure OpenAI is a service that allows businesses to integrate powerful OpenAI tools into their own cloud applications.

Microsoft's efforts to enhance security

In response to the breach, Microsoft has implemented additional security measures and safety mitigations to safeguard Azure OpenAI from future attacks.