DeepMind Calls for Ethical Evaluation Framework for AI Systems
DeepMind, the AI research lab owned by Google, has put forward a proposal for a framework to assess the societal and ethical risks associated with AI systems. The proposal arrives at a crucial moment, as the U.K. government prepares to host the AI Safety Summit, dedicated to managing the risks posed by advances in AI technology.
Key Takeaway
DeepMind, the Google-owned AI lab, has proposed a framework for evaluating and auditing the ethical risks of AI systems. This comes ahead of the AI Safety Summit, where international stakeholders will discuss managing risks associated with advancements in AI. DeepMind’s proposal emphasizes the need for transparency and involvement of various stakeholders. However, the lab’s association with Google raises questions about the extent of its commitment to transparency practices.
The framework, outlined in a recently released paper from DeepMind, calls for AI developers, app developers, and the wider public to be involved in evaluating and auditing the ethical implications of AI. DeepMind suggests that AI systems should be examined at the “point of human interaction” to understand their potential uses and societal impacts.
However, it’s important to scrutinize DeepMind’s proposals in light of a study conducted by Stanford researchers, which evaluated the transparency of major AI models. The study ranked ten models based on factors such as the disclosure of training data sources, information about hardware usage, and details regarding the labor involved in training. One of Google’s text-analyzing AI models, PaLM 2, scored a disappointing 40% in the transparency assessment.
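To make the scoring concrete, here is a minimal, illustrative sketch of how a figure like 40% can arise when a model is scored as the share of binary disclosure indicators it satisfies; the indicator names and equal weighting are assumptions for illustration, not the Stanford study’s actual rubric.

```python
# Illustrative only: score a model as the share of binary disclosure
# indicators it satisfies. The indicator names are hypothetical and do
# not reproduce the Stanford index's real rubric or weighting.
disclosures = {
    "training_data_sources": False,
    "hardware_used": True,
    "training_labor": False,
    "energy_consumption": True,
    "model_architecture": False,
}

def transparency_score(indicators: dict) -> float:
    """Return the percentage of indicators the model discloses."""
    return 100 * sum(indicators.values()) / len(indicators)

print(f"{transparency_score(disclosures):.0f}%")  # 2 of 5 disclosed -> 40%
```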
Although DeepMind did not directly develop PaLM 2, it’s worth noting that the lab has not consistently prioritized transparency in its own models. This raises questions about the extent to which DeepMind is committed to transparency and ethical practices, especially considering its association with Google, which falls short in crucial transparency measures.
Other Noteworthy AI Developments
- Microsoft study reveals flaws in GPT-4, including its susceptibility to generating biased and toxic text.
- OpenAI introduces web searching feature to ChatGPT and transitions DALL-E 3 into beta.
- Open-source alternatives LLaVA-1.5 and Fuyu-8B challenge OpenAI’s GPT-4V by offering similar capabilities for free.
- Software engineer trains a reinforcement learning algorithm to play the classic game Pokémon, showcasing the algorithm’s continuous improvement.
- Google Search launches a feature to help people practice and improve their English speaking skills, competing with language-learning platform Duolingo.
- Amazon tests Agility’s bipedal robot, Digit, in its facilities, while already utilizing over 750,000 robotic systems.
- Nvidia demonstrates the use of large language models in guiding AI-driven robots, while Meta releases Habitat 3.0, a dataset for training AI agents in realistic indoor environments.
- Chinese startup Zhipu AI secures significant funding to develop AI models to rival those of OpenAI.
- Biden administration imposes further restrictions on Nvidia’s AI chip shipments to China as part of measures to curb Beijing’s military ambitions.
- TikTok accounts use AI to generate renditions of pop songs performed by animated characters such as Homer Simpson, a playful trend that nonetheless raises concerns about rights to artists’ voices and music.
Advancements in Machine Learning
Machine learning models continue to revolutionize various fields. AlphaFold and RoseTTAFold have demonstrated the potential of AI in solving complex problems related to protein folding. Now, David Baker and his team have developed RoseTTAFold All-Atom, which expands the prediction process to include interactions between proteins and other molecules, enabling a better understanding of biological systems.
The integration of visual AI technologies into scientific research is also making significant progress. The SmartEM project, a collaboration between MIT and Harvard, combines computer vision and machine learning so that a scanning electron microscope can examine specimens intelligently: the system focuses on areas it judges important, skips irrelevant ones, and labels the resulting images accordingly.
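As a rough illustration of the idea of steering image acquisition toward informative regions, the sketch below ranks candidate tiles of a quick overview scan by a placeholder importance score and keeps only the top-ranked ones for detailed re-imaging; the contrast-based heuristic and tile size are assumptions, not SmartEM’s actual pipeline.

```python
import numpy as np

def importance(tile: np.ndarray) -> float:
    # Placeholder heuristic: use local contrast (standard deviation) as a
    # proxy for how informative a region is. SmartEM relies on learned
    # models; this stand-in is for illustration only.
    return float(tile.std())

def select_regions(image: np.ndarray, tile: int = 64, keep: int = 10):
    """Split an overview image into tiles and return the coordinates of
    the `keep` highest-scoring tiles for detailed re-imaging."""
    scored = []
    for y in range(0, image.shape[0] - tile + 1, tile):
        for x in range(0, image.shape[1] - tile + 1, tile):
            scored.append((importance(image[y:y + tile, x:x + tile]), (y, x)))
    scored.sort(reverse=True)
    return [coords for _, coords in scored[:keep]]

# A fast low-resolution pass produces `overview`; only the selected tiles
# would then be re-scanned at high magnification and labeled.
overview = np.random.rand(512, 512)
print(select_regions(overview))
```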
AI is also proving valuable in archaeological research. Using lidar, researchers are uncovering hidden Mayan cities and highways, while AI fills in gaps in ancient Greek texts. In a remarkable achievement, a burned, rolled-up papyrus scroll carbonized in the eruption of Mount Vesuvius has begun to be read with the help of AI-assisted imaging techniques. Although the scroll has so far yielded only the word “purple,” it represents a notable advance in deciphering ancient texts.
Furthermore, AI has the potential to assist in improving the quality of content on platforms such as Wikipedia. An AI system can suggest and vet citations, helping address articles with insufficient citations or uncertain factual accuracy.
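As a toy illustration of what “vetting” a citation can involve, the sketch below checks whether a cited passage plausibly supports a claim using simple word overlap; the threshold is arbitrary, and a real system of this kind would rely on trained retrieval and verification models rather than lexical matching.

```python
import re

def tokens(text: str) -> set:
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def vet_citation(claim: str, passage: str, threshold: float = 0.2) -> bool:
    # Jaccard word overlap between the claim and the cited passage. The
    # threshold is arbitrary; a production system would use a trained
    # verification model rather than lexical overlap.
    a, b = tokens(claim), tokens(passage)
    overlap = len(a & b) / len(a | b) if a | b else 0.0
    return overlap >= threshold

claim = "The bridge was completed in 1937."
source = "Construction of the bridge finished in 1937 after four years."
print(vet_citation(claim, source))  # True: the passage plausibly supports the claim
```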
Advancements in language models have also made their mark in mathematics. Llemma, an open model trained on mathematical proofs and papers, can solve complex problems and competes with proprietary models developed by the likes of Google Research. Meanwhile, Meta’s brain decoding paper shows the potential for reconstructing visual perception from high-frequency brain scans, though generative image models do much of the heavy lifting in producing those reconstructions.
Lastly, an ambitious project called CLARA aims to deepen language models’ understanding of the nuances of human speech. By leveraging a library of audio and text in multiple languages, it seeks to help models interpret emotional states and non-verbal cues, improving human-AI interaction.
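The write-up does not spell out CLARA’s training objective, so the sketch below assumes a CLIP-style contrastive loss, one common way to align paired audio and text embeddings so that, for example, an emotional tone in speech can be matched to text describing it.

```python
import numpy as np

def contrastive_loss(audio_emb: np.ndarray, text_emb: np.ndarray, temp: float = 0.07) -> float:
    """Symmetric InfoNCE loss over a batch of paired audio/text embeddings.

    Matched pairs (row i of each matrix) are pulled together; every other
    combination in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temp                                   # (batch, batch) similarities
    diag = np.arange(len(a))                                  # true pairs sit on the diagonal
    log_p_a = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return float(-(log_p_a[diag, diag].mean() + log_p_t[diag, diag].mean()) / 2)

# Toy batch: four audio clips and their transcripts, already embedded.
rng = np.random.default_rng(0)
audio, text = rng.normal(size=(4, 128)), rng.normal(size=(4, 128))
print(contrastive_loss(audio, text))
```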