AI industry leaders create forum to build powerful tech safely
OpenAI, Microsoft, Alphabet's Google and Anthropic are launching a forum to support the safe and responsible development of large machine-learning models, the companies said on Wednesday. These highly capable foundation models, industry leaders have warned, could possess capabilities dangerous enough to pose severe risks to public safety. While the use cases for such models are many, government bodies including the European Union and industry leaders such as OpenAI CEO Sam Altman have said appropriate guardrails will be necessary to tackle the risks posed by AI.

"This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety," said Anna Makanju, Vice President of Global Affairs at OpenAI.



