Explained | Are safeguards needed to make AI systems safe?
The story so far: On May 30, the Centre for AI Safety (CAIS) issued a terse statement aimed at opening a discussion on the possible existential risks arising from artificial intelligence. The statement, endorsed by high-profile tech leaders, came just two weeks after OpenAI CEO Sam Altman, along with IBM’s Chief Privacy Officer Christina Montgomery and AI scientist Gary Marcus, testified before a U.S. Senate committee on the promises and pitfalls of advances in AI. During the hearing, the OpenAI co-founder urged lawmakers to intervene and put safeguards in place to ensure the safety of AI systems. The CAIS aims to mitigate existential risks arising from AI systems that could affect society at large.