‘AI Doomer’ Paul Christiano Appointed to Lead US AI Safety Efforts

Apr 17, 2024
AI

The US AI Safety Institute, housed within the National Institute of Standards and Technology (NIST), has appointed Paul Christiano as its head of AI safety. Christiano is a former OpenAI researcher known for developing the AI safety technique called reinforcement learning from human feedback (RLHF). The appointment has sparked debate because of his prediction that there is a "50 percent chance AI development could end in 'doom.'"
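
For readers unfamiliar with the technique: RLHF first trains a reward model on human judgments between pairs of model outputs, then uses that reward model to fine-tune the underlying model with reinforcement learning. The sketch below illustrates only the reward-modeling step, using toy numeric feature vectors and synthetic preference labels in place of real language-model outputs; the features, the hidden preference direction, and the linear reward model are all illustrative assumptions, not anyone's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "response" is a 4-dim feature vector, and a
# hidden preference direction stands in for the human labeler.
true_w = np.array([1.0, -2.0, 0.5, 3.0])
pairs = rng.normal(size=(500, 2, 4))           # candidate response pairs
scores = pairs @ true_w                        # (500, 2) true preference scores
first_wins = (scores[:, 0] > scores[:, 1])[:, None]
chosen = np.where(first_wins, pairs[:, 0], pairs[:, 1])
rejected = np.where(first_wins, pairs[:, 1], pairs[:, 0])

# Fit a linear reward model r(x) = w·x by maximizing the Bradley-Terry
# likelihood that the chosen response outranks the rejected one:
#   P(chosen preferred) = sigmoid(r(chosen) - r(rejected))
w = np.zeros(4)
lr = 0.1
for _ in range(200):
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))          # predicted preference probability
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad                             # gradient step on the log-loss

accuracy = (((chosen - rejected) @ w) > 0).mean()
print(f"reward model matches the human labels on {accuracy:.0%} of pairs")
# In full RLHF, this learned reward model would then drive reinforcement
# learning (e.g., PPO) to fine-tune the language model itself.
```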

Concerns have been raised within NIST about Christiano's views, particularly his association with the concepts of effective altruism and longtermism, which some staffers believe could compromise the institute's objectivity and integrity. A report from VentureBeat last month suggested that some NIST employees were considering resignation over the appointment. However, there have been no public announcements confirming any such actions.

Christiano's role will include designing and conducting tests of AI models, focusing on capabilities of national security concern, implementing risk mitigations, and steering evaluation processes. His experience includes founding the Alignment Research Center, a nonprofit focused on aligning AI systems with human interests.

Despite the internal dissent, some experts endorse Christiano's appointment. Divyansh Kaushik of the Federation of American Scientists praised Christiano's qualifications on the social platform X but acknowledged the seriousness of the rumored staff opposition at NIST.

The US Secretary of Commerce, Gina Raimondo, expressed confidence in the selected leadership team, which also includes Mara Quintero Campbell as acting chief operating officer and chief of staff, Adam Russell as chief vision officer, Rob Reich as a senior advisor, and Mark Latonero as head of international engagement.

Critics, such as Emily Bender, warn that the focus on hypothetical AI doomsday scenarios distracts from current ethical concerns. Bender argues that the problem lies in what people do with technology rather than the technology itself.

Christiano has defended his position on AI risks through a post on LessWrong, distinguishing between "extinction risk" and "bad futures" and clarifying his views on the potential for AI to accelerate harmful changes. He has also advocated for responsible AI scaling policies and the importance of regulations to manage AI development risks.

Ars Technica Article
