Google adopts rules for safer AI, including Asimov’s Three Laws of Robotics

Tech giants like Google and Microsoft are diving deep into artificial intelligence, aiming to replace human labor with highly accurate, powerful, and unfeeling robots. Recently, researchers from Google, together with scientists at OpenAI, Stanford, and Berkeley, shared a post on the Google Research blog describing the issues and criteria that should be addressed when building AI-powered robots.

The paper lays out five key safety problems, which have been compared to the Three Laws of Robotics, the fictional rules devised by sci-fi novelist Isaac Asimov. According to Chris Olah, one of the post’s authors, “These are all forward thinking, long-term research questions — minor issues today, but important to address for future systems.”

The five safety problems put forward by Google’s researchers are:

  • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so? (See the first sketch after this list.)
  • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office. (See the second sketch after this list.)
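
To make the side-effects problem concrete, here is a minimal Python sketch of one common framing: add an impact penalty to the task reward so that collateral damage stops being profitable for the agent. Everything here (the functions, the numbers, the weight) is hypothetical and for illustration only; it is not code from the paper.

```python
# Minimal sketch of a side-effect penalty: the agent earns reward for
# cleaning, but pays a penalty for unrelated damage it causes along the
# way, such as breaking a vase. All numbers are hypothetical.

def task_reward(messes_cleaned: int) -> float:
    """Naive objective: reward cleaning and nothing else."""
    return 1.0 * messes_cleaned

def penalized_reward(messes_cleaned: int, vases_broken: int,
                     impact_weight: float = 10.0) -> float:
    """Task reward minus an impact penalty for side effects."""
    return task_reward(messes_cleaned) - impact_weight * vases_broken

# Under the naive reward, breaking a vase to clean one extra mess looks
# strictly better; under the penalized reward, it no longer pays off.
print(task_reward(4), task_reward(3))                  # 4.0 3.0
print(penalized_reward(4, 1), penalized_reward(3, 0))  # -6.0 3.0
```

The open question the paper raises is how to define such a penalty in general, without hand-listing every vase the robot might knock over, and without creating a new objective the agent can game in turn.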
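
For the distributional-shift problem, one crude safeguard is to check how far a new input sits from the training data before trusting the learned behavior. The sketch below is again hypothetical (the thresholds and sensor readings are made up): it flags unfamiliar inputs so the system can fall back to a safe default or ask a human.

```python
import math

# Crude distributional-shift check: flag inputs that sit many standard
# deviations away from the training data. Thresholds and readings are
# hypothetical, purely to illustrate the idea.

def fit_stats(training_values: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of one training feature."""
    mean = sum(training_values) / len(training_values)
    var = sum((v - mean) ** 2 for v in training_values) / len(training_values)
    return mean, math.sqrt(var)

def is_in_distribution(x: float, mean: float, std: float,
                       max_z: float = 3.0) -> bool:
    """Treat anything within max_z standard deviations as familiar."""
    return abs(x - mean) <= max_z * std

# Trained on factory-floor noise levels (in decibels); a new factory
# reading looks familiar, but a quiet office does not, so the robot
# knows its environment has changed and should act more cautiously.
mean, std = fit_stats([70.0, 75.0, 72.0, 68.0, 74.0])
print(is_in_distribution(71.0, mean, std))   # True  -> proceed normally
print(is_in_distribution(45.0, mean, std))   # False -> fall back / ask
```

Real systems would use far richer detectors than a single feature, but the principle is the same: notice when you are off your training distribution, and get cautious.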

In Google’s words: “We believe in rigorous, open, cross-institution work on how to build machine learning systems that work as intended. We’re eager to continue our collaborations with other research groups to make positive progress on AI.” Recently, Microsoft had to shut down its AI chatbot Tay after it began posting offensive and threatening messages. So let’s hope these companies keep taking steps to build AI that helps humanity rather than destroys it.

Image Credit: Chappie (movie)
