Instead of machines pursuing goals of their own, the new thinking goes, they should seek to satisfy human preferences; their only goal should be to learn more about what our preferences are. In his recent book, Human Compatible, Russell contends that uncertainty about our preferences, and the resulting need to look to us for guidance, will keep AI systems safe.
Russell’s ideas are “making their way into the minds of the AI community,” said Yoshua Bengio, the scientific director of Mila, a top AI research institute in Montreal. He said Russell’s approach, in which AI systems aim to reduce their own uncertainty about human preferences, can be achieved with deep learning — the powerful method behind the recent revolution in artificial intelligence, in which the system sifts data through the layers of an artificial neural network to find patterns in it. “Of course more research work is needed to make that a reality,” he said.
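To make the intuition concrete, the following is a minimal toy sketch in Python of an agent that keeps a probabilistic belief over what the human wants, asks for guidance while that belief is still uncertain, and only acts on its own once the uncertainty has dropped. The action names, candidate preference hypotheses, and entropy threshold are illustrative assumptions for this sketch, not Russell's formal framework or anything specified in the article.

```python
# Toy sketch: an agent that is uncertain about human preferences defers to the
# human until its uncertainty is low enough to act. All specifics (actions,
# hypotheses, threshold) are illustrative assumptions, not Russell's formulation.

import math

ACTIONS = ["make_coffee", "make_tea", "do_nothing"]

# Candidate hypotheses about what the human actually wants: each maps an action
# to a reward. The agent does not know which hypothesis is true.
HYPOTHESES = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0, "do_nothing": 0.1},
    "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0, "do_nothing": 0.1},
}

def entropy(belief):
    """Shannon entropy (in bits) of the agent's belief over hypotheses."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_reward(action, belief):
    """Expected reward of an action under the current belief."""
    return sum(p * HYPOTHESES[h][action] for h, p in belief.items())

def ask_human(true_hypothesis):
    """Stand-in for querying the human; returns the action they prefer."""
    rewards = HYPOTHESES[true_hypothesis]
    return max(rewards, key=rewards.get)

def step(belief, true_hypothesis, uncertainty_threshold=0.5):
    # If the agent is too uncertain about the human's preferences, it asks
    # rather than acting on its own guess -- the "looking to us for guidance".
    if entropy(belief) > uncertainty_threshold:
        preferred = ask_human(true_hypothesis)
        # Bayesian update: shift probability mass toward hypotheses whose
        # best action matches the human's stated preference.
        likelihoods = {
            h: 1.0 if max(HYPOTHESES[h], key=HYPOTHESES[h].get) == preferred else 0.05
            for h in belief
        }
        total = sum(belief[h] * likelihoods[h] for h in belief)
        belief = {h: belief[h] * likelihoods[h] / total for h in belief}
        print(f"asked human -> '{preferred}', updated belief: {belief}")
    action = max(ACTIONS, key=lambda a: expected_reward(a, belief))
    print(f"acting: {action}")
    return belief

if __name__ == "__main__":
    belief = {"likes_coffee": 0.5, "likes_tea": 0.5}   # maximal uncertainty
    true_hypothesis = "likes_tea"                      # known only to the human
    for _ in range(3):
        belief = step(belief, true_hypothesis)
```

In this sketch the safety behavior comes entirely from the belief: while the entropy over preference hypotheses is high the agent queries the human, and only after the update concentrates its belief does it choose the action with the highest expected reward.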
The original article can be found here.
AIWS Innovation Network (AIWS-IN) is a platform for distinguished thinkers and innovators to put into practice the concepts and principles of the AIWS Social Contract 2020 for peace, security, and prosperity. This issue could become an initiative within the AIWS-IN.