Commitment to economic and security cooperation between Abe and Merkel

Japanese Prime Minister Shinzo Abe and German Chancellor Angela Merkel exchanged views on promoting bilateral security and economic cooperation at their recent meeting.

In a press conference held after the talks with Abe, Merkel made clear her support for realizing a “free and open Indo-Pacific”. According to her, the initiative is also a response to China's territorial ambitions, and Japan is strengthening its relationship with Germany on the basis of that shared concern about China.

In past years, Germany was thought to prioritize its relations with China. This time, however, the German Chancellor visited only Japan, a sign that the country is keeping a measured distance from China.

Meanwhile, Abe and Merkel confirmed that the two countries would strengthen joint research on artificial intelligence (AI) and self-driving cars. Both Japan and Germany are concerned about China's growing ability to dominate the market in the IT field.

Abe and Merkel attended a meeting of business leaders from both countries after the summit talks on Monday. In the era of 5G – the next-generation mobile network technology – Merkel raised the question of how to prevent the Chinese administration from collecting and exploiting huge amounts of data.

In 2015, the Boston Global Forum named Germany's Chancellor Angela Merkel the recipient of its World Leader for Peace, Security and Development Award. In the same year, Japan's Prime Minister Shinzo Abe was named a recipient of the Forum's World Leader in Cybersecurity Award, which is granted to an individual whose outstanding contributions have advanced cybersecurity.

Artificial Intelligence — gift or curse?

Unlike most great technological advances, the arrival of Artificial Intelligence (AI) comes with extraordinary warnings.

Stephen Hawking, one of the world's leading physicists, grimly observed that AI could well overtake humans and become a new form of life. In an interview with the magazine Wired, he said he feared that AI could completely replace people: if people can design computer viruses, someone will design AI that improves and replicates itself.

Alan Turing, the father of modern computing, proposed that if people cannot distinguish a machine's responses from a person's, then the machine can be said to think. The field of AI was born in the mid-1950s at Dartmouth College, and it astonished the world by solving simple algebra problems and proving logical theorems.

Decades later, the so-called expert systems appeared, and growing computing power multiplied their capacity for tackling complex problems. These advances were driven, for the most part, by Moore's Law.

In 1997, IBM's chess computer Deep Blue defeated the reigning world champion Garry Kasparov. A more recent milestone is the deep neural network, a subcategory of the artificial neural network, which learns the numerical transformation that converts a given input into an output.
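
That last idea, a network learning the numerical transformation from input to output, can be sketched in a few lines of NumPy. This is a generic toy example: the XOR task, layer sizes, and learning rate below are arbitrary illustrative choices, not details from the article.

```python
import numpy as np

# A minimal two-layer neural network trained on XOR: it learns the
# numerical transformation that maps each input pair to its output.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return h, out

losses = []
lr = 0.5
for step in range(2000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: gradients of the mean-squared error.
    d_out = (out - y) * out * (1 - out)
    dW2 = h.T @ d_out / len(X)
    db2 = d_out.mean(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h / len(X)
    db1 = d_h.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The loss falls as the network fits the input-output mapping.
print(round(losses[0], 3), round(losses[-1], 3))
```

The same gradient-descent recipe, scaled up to many layers and millions of parameters, is what the deep networks discussed in the rest of this issue rely on.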

Although these advances are valuable in everything from medicine to automation, AI carries an unavoidable danger that has recently become clear: ethical concerns that must be kept in check.

The notion of artificial morality was introduced by Wendell Wallach in his book “Moral Machines”, in which he asked whether software designers should be held responsible for whether the programs they create behave morally or not.

Is there a clear boundary between human awareness and emotion (for example, empathy) and their exact reproduction in a machine? Joseph Weizenbaum, one of the fathers of AI and an MIT professor, was convinced that AI would never be able to replicate human qualities such as compassion or judgment.

Taking the opposite view, the roboticist Hans Moravec and his colleagues predicted that merging humans and machines into cyborgs would create a more intelligent “species” – and that, over time, such a species could displace people.

Another area of concern is the effect of AI on work. Even so, some have found that automation often yields a net increase in employment, thanks to unpredictable downstream microeconomic and macroeconomic productivity effects.

We are clearly in ethically and technically unexplored territory, in which caution and attention to the potential for unintended consequences are vital. We can only hope – and pray – that AI will play a positive role in our lives, and in those of the generations to come.

Whether for better or worse, the future of AI lies mainly in the hands of its developers; it is ours, as humans, to decide. Hence, developers need a set of guidelines on ethics and standards to follow, covering not just what to build but how to build it ethically. This is exactly what the Michael Dukakis Institute for Leadership and Innovation (MDI) is attempting to provide: so far, the organization has been working on developing the AIWS Index for governments and enterprises.

China and Russia could disrupt critical national infrastructure in the US

Both China and Russia are capable of launching cyberattacks that could bring down power grids or hospitals, according to the most recent annual US Worldwide Threat Assessment.

The 42-page report, presented to the Senate Select Committee on Intelligence, cites cyberattacks, online disinformation, and election interference as the top security concerns facing the US today. It identifies China and Russia as the greatest potential sources of attacks on US infrastructure (natural gas pipelines, for example), with the ability to cause disruption lasting days or even weeks. The assessment said Russia could carry out cyber espionage and launch influence campaigns like those conducted during the 2016 US presidential race, and that it is “becoming increasingly adept at using social media to alter how we think, behave, and decide.”

Unsurprisingly, the report identifies China as America's most active adversary when it comes to cyber espionage. It holds that Chinese IT firms are being used to spy on the US, an official view that explains the recent treatment of Huawei by the US and certain allies.

The report says the technology-related threats it identifies will only increase as people integrate billions of new digital devices into their personal lives and workplaces. It cautions that the US's overall lead in science and technology will continue to shrink, and it cites several emerging technologies that could enable new threats: AI, biotechnology, and 5G networks.

AI should be used for causes that serve humanity. Toward this aim, MDI has built the AIWS Initiative to establish a society with the best and most effective applications of AI, bringing the greatest benefit to humans.

A New Algorithm Trains AI to “De-bias” Itself

AI systems are known to be capable of unfairness, because they can absorb the same prejudiced views that are widespread in the society that produced their training data. An algorithm developed by engineers at MIT CSAIL may eliminate that bias from AI.

The algorithm learns the underlying structure of the training data for a specific task, such as face recognition, and uses that structure to identify and minimize any hidden biases. In testing, it reduced “categorical bias” by over 60%, while performance remained stable.

Unlike other approaches, which require a human to specify the biases to look for, the MIT team's algorithm can examine a dataset, identify any biases, and automatically resample the data, without needing a programmer in the loop.
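
The MIT method is built around a model that learns the structure of the data and then resamples it so under-represented examples are seen more often. The resampling idea alone can be sketched on a one-dimensional toy feature; everything below (the synthetic "latent" values, the bin count, the 90/10 group split) is an illustrative assumption, not the team's actual code.

```python
import numpy as np

# Simplified sketch of automatic debiasing by resampling: examples that
# are rare under the estimated data distribution get a higher probability
# of being drawn during training, so the model sees them more often.
rng = np.random.default_rng(1)

# Toy "latent feature" for 1000 training faces: 90% cluster near 0
# (over-represented group), 10% near 3 (under-represented group).
latent = np.concatenate([rng.normal(0, 0.5, 900), rng.normal(3, 0.5, 100)])

# Estimate the density of each example with a histogram.
hist, edges = np.histogram(latent, bins=20, density=True)
bin_idx = np.clip(np.digitize(latent, edges) - 1, 0, len(hist) - 1)
density = hist[bin_idx] + 1e-6

# Sampling weight is inversely proportional to density: rare examples
# are up-weighted, common ones down-weighted.
weights = 1.0 / density
weights /= weights.sum()

batch = rng.choice(len(latent), size=5000, replace=True, p=weights)
minority_share = float(np.mean(latent[batch] > 1.5))
# Far closer to balanced than the raw ~10% minority share.
print(round(minority_share, 2))
```

In the published system the "latent feature" is not hand-picked; it is learned by a variational autoencoder, which is what lets the method find biases no programmer thought to specify.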

“Facial classification in particular is a technology that’s often seen as ‘solved,’ even as it’s become clear that the datasets being used often aren’t properly vetted,” said Alexander Amini, a co-author of the study.

“Rectifying these issues is especially important as we start to see these kinds of algorithms being used in security, law enforcement and other domains.”

According to Amini, the de-biasing algorithm would be particularly relevant for very large datasets, which are too big to be vetted by humans.

Giving AI the capability to reason and adapt would be a breakthrough for the industry, but it also means AI would be given significant control over itself and its actions, which, without thorough consideration, might lead to unforeseen consequences. This problem calls for monitoring and regulation of AI. Currently, professors and researchers at MDI are working on building an ethical framework for AI to guarantee the safety of AI deployment.

America unseals charges against Huawei

The U.S. Justice Department unsealed two indictments against China’s Huawei Technologies Co Ltd.

One indictment alleges that the company, acting through a U.S. Huawei subsidiary, stole trade secrets from T-Mobile, an American wireless carrier.

The allegations stem from a 2017 civil lawsuit, in which a Huawei employee was found to have swiped one of the arms of “Tappy”, a robotic phone-testing device owned by T-Mobile.

A Seattle jury ordered Huawei to pay T-Mobile $4.8m in compensation, although the court found “neither damage, unjust enrichment nor willful and malicious conduct by Huawei”.

Huawei was also accused of engaging in deceitful financial practices, conducted through a subsidiary's dealings with four major banks (HSBC among them), that violated international sanctions on Iran.

Meng Wanzhou, the chief financial officer of the Chinese tech giant Huawei, was arrested by Canadian police on December 1st on behalf of the American authorities. The U.S. has formally requested Meng Wanzhou's extradition, and the Canadian Department of Justice has 30 days to decide whether to commence the extradition process.

In the face of the rapid development of technology in general and artificial intelligence (AI) in particular, the risk that it develops beyond the current frameworks is entirely real. The Michael Dukakis Institute (MDI) has been developing the Artificial Intelligence World Society (AIWS) with its 7-layer AIWS Model; at the same time, MDI and AIWS established the AIWS Standards and Practice Committee with the goal of developing standards for an artificial intelligence citizen.

A comparison of two differing tech leadership styles: Elon Musk and Tim Cook

Basically, there are two types of business leader: the more passionate and imaginative visionary, and the more consistent, level-headed operator.

In 2018, Elon Musk was put under serious scrutiny. He had promised to build a record number of Model 3 vehicles, and to do so he made a very late move to set up another production line in a tent next to his huge factory. He then had to ship his vehicles to multiple regions – China, in addition to the United States, Europe, and many other locations. Tesla has since worked through these strains to the point where it is producing modest but steady profits each quarter.

The first type is the leader with an innovative vision. These pioneers can generate ideas for their business model faster than the organization can execute them. Elon Musk, with his extraordinary ideas and outsized personality, is a prime example: he is usually at work on the next big thing and offering novel ideas.

The second type is the CEO who can make an organization run like an efficient machine, such as Tim Cook, who focuses on services and the operational side of the business.

That said, both have been burned by their particular weaknesses. Elon Musk stumbled into numerous controversies throughout 2018: clashing with the SEC, imposing harsh and unpredictable layoffs, and smoking weed on Joe Rogan's podcast. He has done many things Tim Cook would never do. Tim Cook, on the other hand, is heavily hampered by his lack of inventiveness: the iPhone is never revolutionary, and Cook is now milking the iPhone for every dollar it is worth while pouring cash into Research and Development (R&D).

Someone like Tim Cook would be exceptionally useful to someone like Elon: Cook is a master at running a sound organization, which means the operational side of a company works extraordinarily well when run by him. Likewise, someone like Elon would be useful to someone like Tim Cook: when Apple needed new direction, Elon has shown a record of generating big, actionable ideas that fit what a company is doing, and he could almost certainly offer a vision for Apple if inspired to do so.

Of course, there are individuals who seem to fit both archetypes. Does Jeff Bezos fit the equation? He appears to be highly innovative and also to hold essential business principles. That may be why he left Wall Street to start Amazon: he had an innovative idea and did not want to stay in the financial sector. He is also famous for the emails he sends to his staff, and he does not permit PowerPoint or slide presentations at meetings because they are boring. He holds a fair number of creative beliefs and practices and has a markedly passionate personality. He also runs Amazon well, though that may be because he has support, and a business background besides.

So it is fair to say that you can look at an executive or business leader and see where they fall on the Creative – Business Operations spectrum.

The best case for a business is having individuals like Tim Cook and Steve Jobs in the same organization, as at Apple. That is why Apple is so successful (along with Jony Ive and Woz), and why it now has so much money tucked neatly away to spend on R&D: the company has thrived on pairing a creative pioneer with an operations master.

In times like this, AI development needs guidance so it does not get out of hand; rules and principles need to be followed. This is the focus of the BGF-MDI development of the AIWS Initiative: to come up with a set of moral values and norms for AI so that it can become more transparent.

President of the Republic of Finland Sauli Niinistö met Prime Minister of Sweden Stefan Löfven

On Jan 28, 2019, President of the Republic of Finland Sauli Niinistö met Prime Minister of Sweden Stefan Löfven at Mäntyniemi.

During the meeting, the President and the Prime Minister discussed cooperation between Sweden and Finland, European defense cooperation, and issues related to the Arctic region, building on the cooperation and friendship between the two countries.

In December 2018, President Sauli Niinistö of the Republic of Finland received the World Leader for Peace and Cybersecurity Award from the Boston Global Forum (BGF) and the Michael Dukakis Institute for Leadership and Innovation (MDI) for his leadership role in establishing Finland as a vital member of the world community and his support of The European Centre of Excellence for Countering Hybrid Threats.

In this role, he has fostered the understanding of, and solutions to, the numerous threats we face from political forces, economic instability, military intervention, civil unrest, climate change, and unsafe Internet practices.

President Niinistö said he was honored to receive the award, and emphasized that countries should make every effort to increase citizens' awareness in all areas of society and to strengthen cooperation among nations, as well as rules that apply across international borders.

AI can translate human thoughts into speech

By tracking a person's brain activity, this technology can reconstruct the words the person hears with remarkable clarity. The breakthrough harnesses the power of speech synthesis and artificial intelligence, and it lays the groundwork for helping people who cannot speak regain the ability to communicate with the outside world.

This research study is led by Prof. Nima Mesgarani, a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute.

The research combines recent advances in deep learning (deep neural networks) with the latest innovations in speech synthesis technology to reconstruct intelligible speech from the human auditory cortex. The approach has demonstrated positive results for the next generation of speech brain-computer interface (BCI) systems, which could not only restore communication for paralyzed patients but also transform human-computer interaction technologies.

Future breakthroughs that the technology could lead to include a wearable brain-computer interface that could translate an individual’s thoughts, such as ‘I need a glass of water’, directly into synthesized speech or text. “This would be a game changer,” said Prof Mesgarani. “It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
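
The study's actual pipeline, deep networks mapping auditory-cortex recordings to vocoder parameters, is far beyond a snippet, but the core decoding step, regressing speech features from multi-electrode signals, can be sketched with synthetic data. Every number below is an illustrative assumption, and a simple linear ridge decoder stands in for the paper's deep network.

```python
import numpy as np

# Toy stand-in for the decoding step: learn a linear map from "neural"
# electrode features to spectrogram frames, then reconstruct unseen frames.
rng = np.random.default_rng(0)

n_frames, n_electrodes, n_freq = 500, 64, 32
spec = rng.random((n_frames, n_freq))           # target spectrogram frames
mix = rng.normal(size=(n_freq, n_electrodes))   # how speech drives electrodes
neural = spec @ mix + 0.05 * rng.normal(size=(n_frames, n_electrodes))

train, test = slice(0, 400), slice(400, 500)

# Ridge regression: W = (X^T X + lambda*I)^-1 X^T Y
X, Y = neural[train], spec[train]
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

recon = neural[test] @ W
err = float(np.mean((recon - spec[test]) ** 2))
baseline = float(np.mean((spec[test] - spec[train].mean(0)) ** 2))
# Decoded frames should beat the mean-spectrogram baseline by a wide margin.
print(err < baseline)
```

In the real system, the reconstructed spectrogram-like features are then fed to a vocoder to produce audible speech; the deep network replaces the linear map here because cortical responses are highly nonlinear.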

In a scientific first, Columbia neuroscientists created a system that translates brain activity into intelligible, recognizable speech.

The findings were published in Scientific Reports. “Our voices connect us with our friends, family and the world around us,” said Nima Mesgarani, PhD, the paper's senior author and a principal investigator at Columbia University's Mortimer B. Zuckerman Mind Brain Behavior Institute. “With today's research, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener.”

Decades of research have shown that when people speak – or even imagine speaking – telltale patterns of activity appear in their brains. Scientists have tried to record and decode these patterns so that they can be translated into words.

But accomplishing this feat has proved challenging. Early efforts to decode brain signals failed to produce anything resembling intelligible speech. Dr. Mesgarani's group turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.

“This is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions,” said Dr. Mesgarani, who is also an associate professor of electrical engineering at Columbia's Fu Foundation School of Engineering and Applied Science.

To teach the vocoder to interpret brain activity, Dr. Mesgarani teamed up with Ashesh Dinesh Mehta, MD, PhD, a neurosurgeon at the Northwell Health Neuroscience Institute.

“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by a variety of people, while we measured their patterns of brain activity,” Dr. Mesgarani said. “These neural patterns trained the vocoder.”

Next, the researchers asked those same patients to listen to speakers reading digits between 0 and 9, while recording the brain signals that could then be run through the vocoder. The end result is a robotic-sounding voice reciting a sequence of numbers. To check the accuracy of the recordings, Dr. Mesgarani and his team asked people to listen to them and report what they heard.

“We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” Dr. Mesgarani said. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally heard with surprising accuracy.”

Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on the brain signals emitted when a person speaks or imagines speaking.

“In this scenario, if the wearer thinks ‘I need a glass of water’, our system could take the brain signals generated by that thought and turn them into synthesized, verbal speech,” Dr. Mesgarani said. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

According to Mr. Nguyen Anh Tuan, Director of The Michael Dukakis Institute for Leadership and Innovation and Co-Founder and Chief Executive Officer of The Boston Global Forum: “What we hoped for has finally come true.”

“This success will especially help make the AIWS ideas come true. People will be more honest, unable to say something different from what they are thinking, unable to be false. The cultural values of AIWS, to build a faithful and honest society, now have the opportunity to be implemented, to make AIWS a reality,” said Mr. Nguyen Anh Tuan.

Professor Jason Furman: “Technology can give us new choices and new opportunities, and in that sense can make everything better, but only if we are the users.”

On January 17, 2019, Professor Jason Furman – former Chairman of the Council of Economic Advisers and chief economist under President Obama, now a professor at Harvard University's Kennedy School of Government – gave a talk on AI issues at an AIWS Roundtable held by Vietnam National Television (VTV) and the Boston Global Forum (BGF).

Prof. Furman thinks the most important thing to understand is that technology is not destiny. “It doesn't tell us what is going to happen to jobs, it doesn't tell us what is going to happen to wages, what is going to happen to our economy and our society,” said Furman.

“Technology can give us new choices and new opportunities, and in that sense can make everything better, but only if we are the users,” Furman noted.

“Take two countries with very similar technology: in one there are a lot of people who are unemployed, while Switzerland, literally, has a very high level of employment. That has nothing to do with the Terminator, nothing to do with killer robots. It has everything to do with economic policy, institutions and the culture…”

On the question of unemployment, he gave an example: “We have had technology replacing human labor for a long time. We could work 4 hours a week and earn the same amount that we earned in 1900 working 50 hours a week – but we want to be richer!”
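
Furman's 4-versus-50-hours illustration is, at bottom, a claim about productivity growth. A quick back-of-the-envelope check of the annual growth rate it implies (using 1900-2019 as the window is an assumption for the arithmetic, not a figure Furman gave):

```python
# Furman's example implies hourly productivity roughly 12.5x higher than
# in 1900 (earning in 4 hours what once took 50). The compound annual
# growth rate needed to produce that multiple over 119 years:
multiple = 50 / 4
years = 2019 - 1900
annual_growth = multiple ** (1 / years) - 1
print(f"{annual_growth:.1%}")  # about 2.1% per year
```

A bit over 2% a year is in line with long-run US productivity growth, which is why the illustration is plausible: modest compounding, sustained for a century, multiplies output per hour more than tenfold.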

According to Prof. Furman, new types of jobs are appearing, there is more demand across all types of jobs, and existing jobs have changed. Technology will replace certain tasks, but not entire jobs; that is likely what is going to happen in the future.

“But the question is: are you going to prepare people for those jobs, give them the skills and the training? …Automation has left a lot of people disappointed and let down. The way to come back from that is education, training, and having a system that helps place people into jobs. But the most important thing in all of this is that the more innovation we have, the better the sets of options and choices we have, as long as we are willing to do what we need to take advantage of them,” he explained.

Furman said one of the advantages machines have is that some of these problems may actually be more solvable on the machine side than on the human side, but only if we put effort into it.

“The reason we should worry is not that they are worse than people, but that we might be able to solve their problems more easily than we can solve the problems of people.”