A robot was able to reach its destination without GPS

Desert ants are exceptional solo navigators. Inspired by these ants, researchers designed AntBot, the first walking robot that can explore its environment randomly and find its way home automatically, without GPS or mapping.

The optical compass created by the researchers is sensitive to the sky’s polarized ultraviolet radiation. Using this “celestial compass,” AntBot measures its heading with 0.4° precision in clear or overcast weather. The navigation accuracy achieved with such modest sensors shows that bio-inspired robotics has enormous potential.
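The principle behind such a celestial compass can be sketched briefly: a photosensor behind a polarizing filter at angle *a* sees an intensity proportional to cos²(heading − a) (Malus’s law), and combining readings from filters at several orientations recovers the heading. The following is an illustrative simplification under that assumption, not AntBot’s actual firmware; all names are hypothetical.

```python
import math

def polarization_compass(readings, angles):
    """Estimate heading from UV polarization sensor readings.

    Assumes each sensor behind a polarizer at angle a reports an
    intensity following Malus's law: I = I0 * cos^2(heading - a).
    Since cos^2(x) = (1 + cos(2x)) / 2, the readings trace a sinusoid
    in 2*a, and a least-squares fit reduces to a weighted vector sum.
    """
    s = sum(r * math.sin(2 * a) for r, a in zip(readings, angles))
    c = sum(r * math.cos(2 * a) for r, a in zip(readings, angles))
    # Heading in radians; inherently ambiguous by 180 degrees, because
    # polarization direction does not distinguish north from south.
    return math.atan2(s, c) / 2.0

# Simulate noiseless sensors at 8 polarizer orientations
# for a true heading of 30 degrees.
true_heading = math.radians(30)
angles = [i * math.pi / 8 for i in range(8)]
readings = [math.cos(true_heading - a) ** 2 for a in angles]
est = polarization_compass(readings, angles)
```

In this idealized simulation the estimate matches the true heading exactly; a real robot would additionally have to handle noise, cloud cover, and the 180° ambiguity, for example by combining the compass with odometry.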

Humans and robots are drawing ever closer in the modern world, and robots are increasingly able to “help” people. So how do we use machines and AI for good, ensuring safety for the community? Are there moral standards for developing and using robots? These are the questions being studied in Layer 2 of the 7-layer AIWS Model by the Michael Dukakis Institute for Leadership and Innovation (MDI).

How can we ensure that AI benefits society as a whole?

Ángel Gurría, the secretary-general of the Organization for Economic Cooperation and Development (OECD)

In an interview with CNN’s Becky Anderson at the World Government Summit (WGS) in Dubai, Gurría stated: “The danger is not just about knowing the technology that is growing at breakneck speed, but how you empower half of the workforce that will be displaced.”

Gurría focused on the need for countries to understand the consequences for their workforces and for generations yet to enter the world of work.

He also urged national leaders to “broaden their horizons and make appropriate decisions in order to create a better future.”

Under Ángel Gurría’s leadership, OECD is leading the effort to reform the international tax system and to improve governance frameworks in anti-corruption and other fields. OECD is addressing issues surrounding the development of Artificial Intelligence based on two fundamental questions:

What sort of policy and institutional frameworks should guide AI design and use?

How can we ensure that AI benefits society as a whole?

Secretary-General Ángel Gurría was honored as the inaugural recipient of the World Leader in AI World Society award on April 25, 2018, by the Boston Global Forum (BGF) and the Michael Dukakis Institute at the BGF-G7 Summit Conference 2018 at the Harvard University Faculty Club.

Age of Artificial Intelligence

Shaping Futures introduces the writings of Prof. George Church, Harvard Medical School:

A Bill of Rights for the Age of Artificial Intelligence

He wrote that “We should be concerned about the rights of all sentients as an unprecedented diversity of minds emerges.”

He elaborated:

What prevents extension to other animals, organoids, machines, and hybrids? As we (e.g., Hawking, Musk, Tallinn, Wilczek, Tegmark) have promoted bans on “autonomous weapons,” we have demonized one type of “dumb” machine, while other machines — for instance, those composed of many Homo sapiens voting — can be more lethal and more misguided.

In the 7-layer model of AI World Society (AIWS), the 4th layer is vital.

At the AI World Society – G7 Summit Conference at Loeb House, Harvard University, Mr. Paul Nemitz, Principal Advisor in the Directorate-General for Justice and Consumers, European Commission, and a member of the AIWS Standards and Practice Committee, will present the concepts of AI World Society Law, which will shape the future of a world with deeply applied AI.

Robots in the Era of Journalism

Automation is increasingly used in the newspaper industry, and it directly affects the work of reporters and editors. Let’s discuss this with Shaping Futures and assess how it will impact our future.

About a third of the content published by Bloomberg News is produced with automated technology. The company’s Cyborg system helps reporters produce thousands of articles reporting companies’ quarterly results.

Tireless, accurate, and uncomplaining, Cyborg has helped Bloomberg in its race with Reuters, a direct competitor in business and financial journalism.

Beyond business results, Cyborg has also helped Bloomberg produce news about sports tournaments and earthquakes.

More and more newspapers use artificial intelligence to serve their work. Last week, the Australian version of The Guardian published the first article supported by robots; and Forbes recently announced that they are testing a tool called Bertie to provide reporters with a rough draft of the articles.

The use of artificial intelligence is becoming part of the journalism industry, but it is not a threat to reporters. Instead, it allows journalists to spend more time on substantive work.

“The work of journalism is creative, it’s about curiosity, it’s about storytelling, it’s about digging and holding governments accountable, it’s critical thinking, it’s judgment — and that is where we want our journalists spending their energy,” said Lisa Gibbs, the director of news partnerships for The A.P.

A.P., The Post and Bloomberg also set up internal alerts to signal anomalous data bits. Reporters who see these warnings can write bigger stories.

For example, during the Olympics, The Post set up alerts in Slack, the workplace messaging system, to notify editors if a result came within 10% of a world record.
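A rule like this can be sketched in a few lines. This is a hypothetical illustration of the kind of check described above, not The Post’s actual code; the function names and message format are assumptions.

```python
def near_record(result, record, threshold=0.10):
    """Return True if a result falls within `threshold` (10%) of a record."""
    return abs(result - record) / record <= threshold

def build_alert(event, result, record):
    """Build an alert message for editors, or None if nothing is notable."""
    if near_record(result, record):
        return f"ALERT: {event} result {result} is within 10% of record {record}"
    return None

# A 9.80s finish in the 100m dash is within 10% of the 9.58s world record,
# so this produces an alert; an 11.0s finish would not.
alert = build_alert("100m dash", 9.80, 9.58)
```

In practice the alert string would be posted to a messaging channel via a webhook, and the thresholds tuned per event.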

But machine-generated stories are not infallible. For an earnings report article, for instance, software systems may meet their match in companies that cleverly choose figures in an effort to garner a more favorable portrayal than the numbers warrant. At Bloomberg, reporters and editors try to prepare Cyborg so that it will not be spun by such tactics.

AI becomes a productivity tool in reading reports and finding clues. When performing data analysis, AI can help detect abnormal factors.

“A few years ago, AI was only used in high-tech companies, but now it really has become an essential need,” said Francesco Marconi, the head of research and development at The Journal. “I think a lot of the tools in journalism will soon be powered by artificial intelligence.”

Mr. Marconi of The Journal agreed, likening the addition of A.I. in newsrooms to the introduction of the telephone. “It gives you more access, and you get more information quicker,” he said. “It’s a new field, but technology changes. Today it’s A.I., tomorrow it’s blockchain, and in 10 years it will be something else. What does not change is the journalistic standard.”

According to Marc Zionts, the chief executive of Automated Insights, machines have a long way to go before they replace flesh-and-blood reporters and editors.

“If you are a non-learning, non-adaptive person — I don’t care what business you’re in — you will have a challenging career,” Mr. Zionts said.

In addition to giving reporters more time to pursue their interests, machine journalism comes with an added benefit for editors.

“One thing I’ve noticed,” said Mr. St. John, “is that our A.I.-written articles have zero typos.”

With the purpose of ensuring AI’s future, the Michael Dukakis Institute has launched the AIWS Initiative, including the AIWS 7-Layer Model for ethical AI and concepts for the design of AI-Government, which has received the support of Paul Nemitz.

AI ethics centers of technology giants

In Europe, nations like the UK and France have put ethics at the center of AI while laying out stronger compliance rules for tech giants to adhere to.

As AI and machine learning become the new business standard, tech giants and service providers around the world are riding the rising tech wave.

As the world accepts the inevitability of AI and ML, the discussion has shifted to ethics, with governments and regulators introducing stringent policies on how the technology may be applied.

Following these developments, tech giants like Google and Facebook, among many other companies, have published ethics policies governing the deployment of AI and ML within their organizations.

Biased algorithms drove Google to set up an ethics policy

Entangled in controversy, Google was among the first tech companies to release an AI policy highlighting ethics in AI. In a 2018 blog post, Google CEO Sundar Pichai announced the company’s 7 AI Principles. Among the key principles, Google stated that it would avoid creating or reinforcing unfair bias.

Since releasing its AI Principles, the company has also conducted numerous training and discussion sessions promoting the ethical use of AI. In 2017, Google’s DeepMind launched its ethics unit to examine the company’s AI work.

Flawed facial recognition turned Microsoft into an active campaigner

AI is a slippery slope, and any organization that has used the technology without appropriate checks and balances has ended up in trouble. Microsoft, too, entered the debate when its AI system associated darker-skinned people with higher risk compared to lighter-skinned people.

In May 2018, Microsoft CEO Satya Nadella announced that the company would soon release its plan for building a robust ethical AI framework. Through Fairness, Accountability, Transparency, and Ethics in AI (FATE), Microsoft hopes to address the complex social implications of AI, machine learning, data science, large-scale experimentation, and increasing automation.

Facebook’s multimillion-dollar ethical AI push

Facebook recently announced $7.5 million in funding for a new AI-ethics research center, the Institute for Ethics in Artificial Intelligence, in partnership with the Technical University of Munich (TUM).

With Fairness Flow, Facebook has dived into creating AI tools with ethical considerations woven in. Fairness Flow is a tool designed specifically to detect bias in machine learning algorithms. Facebook has also extended its ethics efforts through organizations such as the AI4People initiative and the Partnership on AI.
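Fairness Flow’s internals are not public, so as an illustrative stand-in, here is the simplest kind of check such a bias-detection tool might perform: comparing a model’s positive-outcome rates across demographic groups (the “four-fifths rule” from U.S. employment guidelines is one common threshold). All names and data here are hypothetical.

```python
def selection_rates(decisions, groups):
    """Positive-outcome rate per group, given parallel lists of
    binary decisions (1 = favorable) and group labels."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if d else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Min/max ratio of group selection rates; a value below 0.8
    fails the four-fifths rule of thumb."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: group "a" is approved 3/4 of the time, group "b" only 1/4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(decisions, groups)  # 0.25 / 0.75 = 1/3
```

A ratio this far below 0.8 would flag the model for review; production tools add statistical significance tests and many more metrics on top of this basic comparison.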

Amazon’s trouble with a biased recruiting tool

The e-commerce stalwart has been one of the essential trailblazers in disrupting the online retail sector through its adoption of the latest technologies. However, the company has entered troubled waters with its hasty adoption of AI. In 2015, it pulled the plug on its AI-based recruiting tool for bias against women in hiring. More recently, it drew controversy when its facial recognition tool failed to recognize people with darker skin.

Although it has not set up a dedicated AI ethics center, the company joined hands with the likes of Microsoft and IBM in the Partnership on Artificial Intelligence to Benefit People and Society, a consortium of tech leaders working to ensure that AI is developed to benefit humanity.

In general, the current development of AI is not transparent enough to earn people’s trust. With rules and guidelines such as those the AIWS is working on, and ethical frameworks such as those the Michael Dukakis Institute is building through the AIWS Initiative, we can take a step closer to transparency and ethics in AI development.

Efforts of the White House in the battle for global AI leadership

The White House is expected to take action in the coming weeks aimed at boosting U.S. AI and 5G deployment, an administration official told The Hill.


The plan will also deliver the first milestones of the National Quantum Initiative Act.

Lawmakers and experts have long raised concerns that China could beat the U.S. in the race to implement AI and 5G.

“In the coming weeks, we can expect to see action intended to safeguard American [research and development] leadership in AI and 5G,” the administration official said.

The Wall Street Journal on Wednesday reported that the White House plan is expected to include executive orders from President Trump that will channel resources toward improving AI and 5G technology.

Trump, during his State of the Union address on Tuesday night, noted that he supports investing in the “industries of the future.”

“President Trump’s commitment to American leadership in AI, 5G wireless, quantum science, and advanced manufacturing will ensure that these technologies serve to benefit Americans and that the American technological community remains the envy of the world for generations to come,” Michael Kratsios, Deputy Assistant to the President for Technology Policy, said in a statement.

The administration is expected to push for increased spending on researching and developing new technologies and for using government data to improve AI, according to the Journal.

At a Senate Commerce Committee hearing on Wednesday, legislators raised concerns that China could win the “race to 5G” and that Chinese telecom giants could compromise U.S. 5G infrastructure.

“We must be sure that there is a secure supply chain backing up our 5G infrastructure,” Sen. Maria Cantwell (D-Wash.), one of the senators on the committee, said during her opening remarks. “We cannot tolerate a leaky valve or a backdoor into these networks.”

She then called on the Trump administration to give Congress “a real, measurable 5G risk assessment.”

MDI is working to promote the AIWS 7-Layer Model to build Next Generation Democracy. This model will hopefully provide a framework that helps guide the development of AI, protecting its achievements and reducing the risks and actual harms that AI could create for humanity.

The game that could bring AI common sense

Researchers say AI could move from pattern recognition to common sense through a game.

As reported by MIT Technology Review, scientists at the Allen Institute for AI (AI2) believe that Pictionary, in which players draw a picture to convey a word or phrase, could bridge the gap between algorithms, machine learning (ML), and cognitive computing on one side, and what we know as “common sense” on the other.

Artificial intelligence can process and perform tasks defined by its programming at incredible speed, but this does not mean it can make associations among objects and people or reason based on real-world knowledge.

An example is asking an AI the stick-and-carrot question: if you stick a stick into a carrot, does the carrot have a hole, or does the stick?

The answer is obvious to us, but not to today’s AI models.

This lack of common sense is hampering many AI applications today, such as chatbots and voice assistants, which cannot interpret or understand questions beyond the simplest, most stripped-down queries.

Nevertheless, the AI2 team believes that training AI through Pictionary may make the holy grail, common-sense-equipped AI, a reality.

To test the hypothesis, the researchers have created an online version of the game, named Iconary, which pairs AI and human players, both of whom are tasked with guessing the meaning behind a drawing.

The hope is that as the AI learns, it will develop its own kind of common sense by forming associations between different abstract concepts.

Microsoft co-founder Paul Allen is the founder of the nonprofit lab and recently put a further $125 million into its activities.

As AI develops at a very fast pace, it is necessary to observe its progress from time to time to keep it under control. Developers and organizations should use a certain set of standards to keep track of technology’s development. The AIWS 7-layer model for AI ethical issues developed by MDI can be a good one to follow.

Boston Global Forum – Michael Dukakis Institute will be a Strategic Alliance Host of AI World Government Conference & Expo 2019

The World Government AI Conference & Exhibition provides a forum to discuss the tactical benefits that AI brings and strategies for technology deployment to public sector agencies and their supply chains.

The program will take place over three days, from June 24 to June 26, 2019, in Washington, D.C. The conference brings together business and technology leaders from across government, technology and research groups, and industry.

Michael Dukakis Institute for Leadership and Innovation and Boston Global Forum will be a Strategic Alliance Host of The World Government AI Conference & Exhibition.

“We have several activities already underway at AI World 2018 and will continue our strategic alliance for all AI World 2019 events. We will present an AI-Government model at AI World Government,” said Nguyen Anh Tuan, CEO of BGF and Director of MDI.

The conference will provide a clear, detailed picture of where and how to limit risk, showcase successful deployments of AI applications, and offer opportunities to meet technology service providers.

In developing AI, the challenges and opportunities faced by governments and the public sector differ somewhat from those of businesses. AI World Government is a place for business and technology leaders to analyze successful cases of leveraging advanced intelligent technology to develop and improve government services. Solutions to important issues will be sought to help agencies grasp the actual situation in implementing AI-government.

China’s concern about an unintended war from arms race AI

Experts and politicians in China are worried that the rush to integrate artificial intelligence into military weapons and equipment may inadvertently lead to war between nations.


According to a new report published by the US national security think tank Center for a New American Security (CNAS): “recently, Chinese officials and government reports have begun to express concern in multiple diplomatic forums about arms race dynamics associated with AI and the need for international cooperation on new norms and potentially arms control.”

Such concerns extend to China’s private sector. Jack Ma, the chairman of Alibaba, said explicitly in a speech at the 2019 World Economic Forum in Davos that he was worried global competition over artificial intelligence could lead to war.

Despite expressing concern about AI arms races, the vast majority of China’s leadership sees increased military use of AI as inevitable and is aggressively pursuing it.

China’s practice of aggressively developing, using, and exporting increasingly autonomous robotic weapons and surveillance AI technology runs counter to its stated goal of avoiding an AI arms race.

China’s success in commercial AI and semiconductor markets has direct relevance to its geopolitical power as well as to its military and espionage AI capabilities. Commercial success matters to China’s national security both because it reduces the ability of the United States government to put strategic and financial pressure on China and because it expands the technological capabilities available to China’s military and intelligence community. As to the latter, all major technology firms in China cooperate extensively with China’s military and state security services and are required to do so by law.

China’s success in commercial AI and semiconductor markets also brings funding, talent, and economies of scale that both reduce China’s vulnerability to losing access to global markets and provide useful technology for the development of weapons and surveillance capabilities.

Although there are concerns about the risk of AI being developed for the wrong purposes, no one can deny the benefits it offers, even at the national level. However, every country still needs to abide by moral and legal codes when developing in this area, and the world also requires international policies, conventions, and regulations to ensure unity and global consensus in developing AI. Calling on national leaders to build a treaty on the exploitation and development of AI for peace is what the Michael Dukakis Institute (MDI) is actively pursuing through Layer 5 of the 7-layer AIWS Model.