Explained | Are safeguards needed to make AI systems safe?

The story so far: On May 30, the Centre for AI Safety (CAIS) issued a terse statement aimed at opening the discussion around possible existential risks arising from artificial intelligence. The CAIS statement, endorsed by high-profile tech leaders, comes just two weeks after OpenAI CEO Sam Altman, along with IBM’s Chief Privacy Officer Christina Montgomery and AI scientist Gary Marcus, testified before the U.S. Senate committee on the promises and pitfalls of advances in AI. During the hearing, OpenAI’s co-founder urged lawmakers to intervene and put safeguards in place to ensure the safety of AI systems. The CAIS aims to mitigate existential risks arising from AI systems that could affect society at large.