
Stanford researchers find biggest AI models rank low on transparency
The Hindu — Yesterday, Stanford's Center for Research on Foundation Models (CRFM) released the Foundation Model Transparency Index, which measures how open tech companies are about the details of their large language models.

Transparency failures are already familiar across the tech industry: deceptive ads and pricing across the internet, unclear wage practices in ride-sharing, dark patterns that trick users into unknowing purchases, and myriad transparency issues around content moderation that have fed a vast ecosystem of mis- and disinformation on social media.

Open-source models led the index, with Meta AI's Llama 2 at 54% and Hugging Face's BloomZ at 53%, while OpenAI's benchmark model, GPT-4, placed third at 48%, still ahead of Stability AI's Stable Diffusion at 47% and Google's PaLM 2 at 40%.

Bommasani noted, "This is a pretty clear indication of how these companies compare to their competitors, and we hope will motivate them to improve their transparency." Percy Liang, an associate professor at Stanford and director of CRFM who worked on the team, added that model transparency is important for "advancing AI policy initiatives and ensuring that upstream and downstream users in the industry and academia have the information to make informed decisions."