Are there any agencies that keep a check on artificial intelligence, or at least monitor user-safety affected by over-reliance on the technology?

Posted on 2017/01/16 (updated 2017/01/17) by mendicott

> As Concern Grows, Another Philanthropy-Backed AI Watchdog Launches (Jan 2017)

[Ethics and Governance of Artificial Intelligence] is only the latest in a string of high-profile donations to the topic. Just to keep all these straight, here’s a rundown of recent AI watchdog and public interest initiatives, backed at least in part by philanthropy:

  • Future of Life Institute – While also concerned with biotechnology, nuclear weapons, and climate change, this institute’s current focus is artificial intelligence. FLI boasts highly respected researchers, and has been making research grants and building a critical mass of interest around AI risk, releasing some well-publicized open letters. It notably received $10 million from Elon Musk in 2015.
  • Center for Human-Compatible Artificial Intelligence – Led by UC Berkeley’s Stuart Russell, a prominent AI researcher and vocal advocate for responsible AI, this center launched in 2016 pooling efforts of researchers from Berkeley, Cornell and University of Michigan. Backing includes $5.5 million from the Open Philanthropy Project, and funds from the Leverhulme Trust and the Future of Life Institute.
  • The Leverhulme Centre for the Future of Intelligence – This one also opened in 2016 at Cambridge University, where outspoken AI skeptic Stephen Hawking is based. It’s funded with $12 million from the Leverhulme Trust, a British research funder. The Centre draws upon talent at top UK schools, plus UC Berkeley. It’s investigating nine initial projects, such as autonomous weapons and AI policymaking.
  • K&L Gates Endowment for Ethics and Computational Technologies – An endowment launched late in 2016 at Carnegie Mellon University, a top robotics school that made news when Uber recruited away a large number of its faculty. The endowment is backed by a $10 million gift from the law firm of the same name, and will support faculty, fellowships, scholarships, and a conference.
  • Partnership on AI – Also emerging last year, this effort comes entirely from industry—in fact, the players who stand to profit most from AI’s rapid advancement—Facebook, Google (DeepMind), Microsoft, IBM, and Amazon.
  • There’s also OpenAI, though this one is a huge nonprofit research company seeking to advance AI in a transparent and distributed way, backed by Musk, Hoffman, and others.

Addendum:

  • Artificial Intelligence: EU To Debate Robots’ Legal Rights After Committee Calls For Mandatory AI ‘Kill Switches’ (Jan 2017)

See also my Quora answer to:

  • Who is working on programming laws into robots? (Jan 2015)

