AI must not become a driver of human rights abuses
1 year, 6 months ago

Al Jazeera  

It is the responsibility of AI companies to ensure their products do not facilitate violations of human rights. When internet search tools, social media, and mobile technology were first released, and as they grew in adoption and accessibility, it was nearly impossible to predict many of the distressing ways these transformative technologies would become drivers and multipliers of human rights abuses around the world. Learning from these developments, the human rights community is calling on companies developing Generative AI products to act immediately to stave off any negative consequences for human rights.

In the absence of regulation to prevent and mitigate the potentially dangerous effects of Generative AI, human rights organisations should take the lead in identifying actual and potential harm. This means human rights organisations should themselves help build a body of deep understanding around these tools and develop research, advocacy, and engagement that anticipate the transformative power of Generative AI.

History of this topic

Can Nasscom’s ‘Responsible AI’ guidelines pave the way for ethical and safe adoption of artificial intelligence?
1 year, 6 months ago
US officials seek to crack down on harmful AI products
1 year, 7 months ago
UN calls for moratorium on Artificial Intelligence tech that threatens human rights
3 years, 3 months ago
UN urges moratorium on use of AI that imperils human rights
3 years, 3 months ago