IBM CEO Ginni Rometty: AI will change 100 percent of jobs over the next decade

IBM’s Chair, CEO and President Ginni Rometty has a powerful message for workers and employers in all strata of society: The Fourth Industrial Revolution is underway and it is shaping up to be one of the most significant challenges and opportunities of our lifetime. We are already seeing jobs, policies, industries and entire economies shifting as our digital and physical worlds merge.

Image: Ginni Rometty (Photo: Cindy Ord | CNBC)

According to the World Economic Forum, the value of digital transformations in the Fourth Industrial Revolution is estimated at $100 trillion in the next 10 years alone, across all sectors, industries and geographies.

“As a result, we face an imminent and profound transformation of the workforce over the next five to 10 years as analytics and artificial intelligence change job roles at companies in all industries,” Rometty said while giving a keynote address at CNBC’s At Work Talent & HR: Building the Workforce of the Future Conference in New York on Tuesday, April 2. In February, the executive was appointed to Trump’s American Workforce Policy Advisory Board along with 24 other leaders.

While only a minority of jobs will disappear, the majority of roles that remain will require people to work with the aid of analytics and some form of AI, and this will require skills training on a large scale, Rometty said.

“I expect AI to change 100 percent of jobs within the next five to 10 years,” the IBM CEO said.

Rometty’s call to action comes at a time when the AI skills gap and the future of work carry a growing sense of urgency. The technology sector accounts for 10 percent of U.S. GDP and is the fastest-growing part of the American economy, but there are not enough skilled workers to fill the 500,000 open high-tech jobs in the U.S., according to the Consumer Technology Association’s Future of Work survey. The tech industry is also concerned that school systems and universities have not moved fast enough to adjust their curricula to delve more into data science and machine learning. As a result, companies will struggle to fill jobs in software development, data analytics and engineering.

“To get ready for this paradigm shift companies have to focus on three things: retraining, hiring workers that don’t necessarily have a four-year college degree and rethinking how their pool of recruits may fit new job roles,” Rometty said.

To address the issue IBM is investing $1 billion in initiatives like apprenticeships to train workers for what it calls “new collar” jobs – a phrase Rometty has coined for workers who have technology skills but not a four-year college degree. She noted the company is crafting 500 apprenticeships with the goal of making this “an inclusive era for employees.”

The “new collar” jobs could range from working at a call center to developing apps or becoming a cyber-analyst at IBM after going through a P-TECH (Pathways in Technology Early College High School) program, a six-year track that begins in high school and culminates in an associate’s degree.

IBM is also helping to catalyze a national movement to close the skills gap. In January, IBM and the Consumer Technology Association announced the launch of the CTA Apprenticeship Coalition to create thousands of new apprenticeships in 20 states.

The coalition provides frameworks for more than 15 different apprenticeship roles in fast-growing fields, including software engineering, data science and analytics, cybersecurity, mainframe system administration, creative design and program management. New apprenticeships will be modeled in large part on IBM’s successful apprenticeship program, which launched in 2017, is registered with the United States Department of Labor and has grown nearly twice as fast as expected.

The apprenticeships created by this Coalition provide pathways to tech jobs in all parts of the country — from Kansas to Minnesota to Louisiana — not just in traditional tech hubs on the coasts. Its goal is to widen the aperture when it comes to hiring by placing the focus on skills rather than specific degrees. From early-career professionals to mid-career transitions and everything in between, these apprenticeships represent a new pathway to success in 21st century careers, including the growing number of new collar roles where a traditional bachelor’s degree is not always required. They also offer an opportunity to build in-demand skills without taking on student debt.

Besides IBM, coalition members include Canon, Ford, Sprint, Toyota and Walmart.

In this tight job market, where the talent chase has become so intense, Rometty has some advice for employers at businesses of all sizes. It’s a shift in thinking she has adopted at IBM: “Bring consumerism into the HR model. Get rid of self-service and, using AI and data analytics, personalize ways to retrain, promote and engage employees. Also move away from centers of excellence to solution centers.”

As she sums it up: “In today’s world, companies need to be agile and realize their workforce is a strategic, renewable asset.”

Ready for 6G? How AI will shape the network of the future

With 5G networks rolling out around the world, engineers are turning their attention to the next incarnation.

The latest technology—the fifth generation of mobile standards, or 5G—is currently being deployed in select locations around the world. And that raises an obvious question. What factors will drive the development of the sixth generation of mobile technology? How will 6G differ from 5G, and what kinds of interactions and activity will it allow that won’t be possible with 5G?

Today, we get an answer of sorts, thanks to the work of Razvan-Andrei Stoica and Giuseppe Abreu at Jacobs University Bremen in Germany. These guys have mapped out the limitations of 5G and the factors they think will drive the development of 6G. Their conclusion is that artificial intelligence will be the main driver of mobile technology and that 6G will be the enabling force behind an entirely new generation of applications for machine intelligence.

First, some background. By any criteria, 5G is a significant advance on the previous 4G standards. The first 5G networks already offer download speeds of up to 600 megabits per second and have the potential to get significantly faster. By contrast, 4G generally operates at up to 28 megabits per second—and most mobile-phone users will have experienced that rate grinding to zero from time to time, for reasons that aren’t always clear.
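Those headline rates are easier to grasp as download times. A quick back-of-the-envelope calculation using the rates quoted above (the 2 GB file size is an arbitrary illustration):

```python
# Back-of-the-envelope download times at the article's quoted rates.
# Note: network rates are in bits per second, file sizes in bytes.

FILE_SIZE_GB = 2                       # illustrative file size
file_bits = FILE_SIZE_GB * 8 * 10**9   # decimal gigabytes -> bits

rates_mbps = {"4G (28 Mbit/s)": 28, "5G (600 Mbit/s)": 600}

for label, mbps in rates_mbps.items():
    seconds = file_bits / (mbps * 10**6)
    print(f"{label}: {seconds:.0f} s")
```

At 28 Mbit/s the download takes roughly ten minutes; at 600 Mbit/s it takes under half a minute.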

5G is obviously better in this respect and could even replace many landline connections.

But the most significant benefits go beyond these headline figures. 5G base stations, for example, are designed to handle up to a million connections, versus the 4,000 that 4G base stations can cope with. That should make a difference to communication at major gatherings such as sporting events, demonstrations, and so on, and it could enable all kinds of applications for the internet of things.

Then there is latency—the time it takes for signals to travel across the network. 5G is designed to have a latency of just a single millisecond, compared with 50 milliseconds or more on 4G. Any gamer will tell you how important that is, because it makes the remote control of gaming characters more responsive. But various telecoms operators have demonstrated how the same advantage makes it possible to control drones more accurately, and even to perform telesurgery using a mobile connection.
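To see why that gap matters to gamers, compare each latency figure with the time budget of a single rendered frame at 60 frames per second (the frame rate here is an illustrative assumption, not from the article):

```python
# How network latency compares with a game's per-frame time budget.
FPS = 60
frame_budget_ms = 1000 / FPS   # ~16.7 ms to render one frame

for label, latency_ms in {"5G": 1, "4G": 50}.items():
    frames = latency_ms / frame_budget_ms
    print(f"{label}: {latency_ms} ms latency ≈ {frames:.2f} frame(s) of delay")
```

A 50 ms round trip swallows about three whole frames before the game can react, while a 1 ms round trip fits comfortably inside a single frame.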

All this should be possible with lower power requirements to boot: current claims suggest that 5G devices should have 10 times the battery life of 4G devices.

So how can 6G improve on that? 6G will, of course, offer even faster download speeds—the current thinking is that they could approach 1 terabit per second.

But what kind of transformative improvements could it offer? The answer, according to Stoica and Abreu, is that it will enable rapidly changing collaborations on vast scales between intelligent agents solving intricate challenges on the fly and negotiating solutions to complex problems.

Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day.

The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times.

That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.
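Stoica and Abreu don’t prescribe a specific protocol, but the flavor of such on-the-fly coordination can be sketched with a toy example: vehicles approaching a junction broadcast their estimated arrival times, and every vehicle applies the same deterministic rule to agree on a crossing order without a central controller. Everything here (the `Vehicle` class, the tie-breaking rule) is hypothetical; a real scheme would also have to handle priorities, packet loss and safety margins.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str        # vehicle identifier
    eta_ms: float   # estimated arrival at the junction, in milliseconds

def negotiate_crossing(vehicles):
    """Toy distributed consensus: every vehicle broadcasts its ETA and all
    apply the same deterministic rule (earliest ETA first, ties broken by
    id), so each arrives at the same order with no central controller."""
    return sorted(vehicles, key=lambda v: (v.eta_ms, v.vid))

approaching = [Vehicle("car-7", 420.0),
               Vehicle("bus-2", 310.0),
               Vehicle("car-3", 310.0)]   # bus-2 and car-3 tie on ETA

order = [v.vid for v in negotiate_crossing(approaching)]
print(order)
```

The point of the sketch is the structure, not the rule: the transient group forms as vehicles approach, agrees on an ordering from shared data, and dissolves once everyone has crossed.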

Of course, this is just one example of the kind of collaboration that 6G will make possible.  Stoica and Abreu envision a wide range of other distributed challenges that become tractable with this kind of approach.

These will be based on the real-time generation and collaborative processing of large amounts of data. One obvious application is in network optimization, but others include financial-market monitoring and planning, health-care optimization, and “nowcasting”—that is, the ability to predict and react to events as they happen—on a previously unimaginable scale.

Artificially intelligent agents are clearly destined to play an important role in our future. “To harness the true power of such agents, collaborative AI is the key,” say Stoica and Abreu. “And by nature of the mobile society of the 21st century, it is clear that this collaboration can only be achieved via wireless communications.”

That’s an interesting vision of the future. There is much negotiating and horse-trading to be done before a set of 6G standards can even be outlined, let alone finalized. But if Stoica and Abreu are correct, artificial intelligence will be the driving force that shapes the communications networks of the future.

Can the use of AI weapons be banned?

Many countries are now competing to utilize AI, or artificial intelligence, in the military sphere. But that may lead to a nightmare: a world where AI-powered weapons kill people based on their own judgment, without any human intervention.

What are fully autonomous lethal weapons?

Fully autonomous lethal weapons powered by AI have become a major issue as rapid advances in technology make them a real possibility.

They differ from the armed UAVs (unmanned aerial vehicles) already deployed in actual warfare: those are remotely controlled by humans, who make the final decisions about where and when to attack.

Autonomous AI weapons, on the other hand, would be able to make those decisions without human intervention.

It is estimated that at least 10 countries are developing AI-equipped weapons. The United States, China, and Russia, in particular, are engaged in fierce competition, believing that AI will be key in determining which country gains a military edge. Concerns are growing that the competition could lead to a new phase of the arms race.

Ban Lethal Autonomous Weapons is an NGO trying to highlight the danger of these weapons.

A non-governmental organization calling for a ban on such weapons produced a video to demonstrate how dangerous these AI weapons could be.

The video shows a palm-sized drone that uses an AI-based facial recognition system to identify human targets and kill them by penetrating their skulls.

In the video, a swarm of micro-drones released from a vehicle flies to a target school, killing young people one after another as they try to flee.

The NGO warns that AI-based weapons may be used as a tool in terrorist attacks, not just in armed conflicts between states.

The video is complete fiction, but there are moves toward using swarms of such drones in actual military operations.

In 2016, the US Department of Defense tested a swarm of 103 AI-based micro-drones launched from fighter jets. Their flights were not programmed in advance. They flew in formation without colliding, using AI to assess the situation for collective decision-making.

Radar imagery shows a swarm of green dots — drones — flying together, creating circles and other shapes.

An arms maker in Russia developed an AI weapon in the shape of a small military vehicle and released its promotional video. It shows the weapon finding a human-shaped target and shooting it. The company says the weapon is autonomous.

AI is also being eyed for command-and-control systems. The idea is to have AI help identify the most effective ways to deploy troops or mount attacks.

The United States and other countries developing AI arms technology say fully autonomous weapons will help them avoid casualties among their own service members. They also say such weapons will reduce human errors, such as bombing the wrong targets.

Warnings from scientists

But many scientists disagree. They are calling for a ban on autonomous AI lethal weapons. Physicist Stephen Hawking, who died last year, was one of them.

Just before his death, he delivered a grave warning. What concerned him, he said, is that AI could start evolving on its own, and “in the future, AI could develop a will of its own, a will that is in conflict with ours.”

There are several issues concerning AI lethal weapons. One is ethical. It goes without saying that humans killing humans is unforgivable, but the question here is whether robots should be allowed to make decisions about human lives.

Another concern is that AI could lower the hurdles to war for government leaders, because it would reduce the costs of war and the loss of their own service men and women.

Proliferation of AI weapons to terrorists is also a grave issue. Compared with nuclear weapons, AI technology is far less costly and more easily available. If a dictator should get access to such weapons, they could be used in a massacre.

Finally, the biggest concern is that humans could lose control of them. AI devices are machines. And machines can go out of order or malfunction. They could also be subject to cyber-attacks.

As Hawking warned, AI could rise against humans. AI can quickly learn how to solve problems through deep learning based on massive data. Scientists say it could lead to decisions or actions that go beyond human comprehension or imagination.

In the board games of chess and go, AI has beaten human world champions with unexpected tactics. But why it employed those tactics remains unknown.

In the military field, AI might choose to use cruel means that humans would avoid, if it decides that would help to achieve a victory. That could lead to indiscriminate attacks on innocent civilians.

High hurdles for regulation

The global community is now working to create international rules to regulate autonomous lethal weapons.

Arms control experts are seeking to use the Convention on Certain Conventional Weapons, or CCW, as a framework for regulation. The treaty restricts the use of landmines, among other weapons. Officials and experts from 120 CCW member countries have been discussing the issue in Geneva; they held their latest meeting in March.

They aim to impose regulations before specific weapons are created. Until now, arms ban treaties have been made after landmines and biological and chemical weapons were actually used and atrocities were committed. In the case of AI weapons, it would be too late to regulate them after fully autonomous lethal weapons come into existence.

International officials and experts are discussing regulating autonomous lethal weapons but haven’t reached a conclusion.

The talks have been continuing for more than five years. But delegates have failed even to agree on how to define “autonomous lethal weapons.”

Some are pessimistic that regulating AI weapons with a treaty is still viable. They say that as the talks stall, the technology will make quick progress and the weapons will be completed.

Sources say discussions in Geneva are moving toward creating regulations less strict than a treaty. The idea is for each country to pledge to abide by international humanitarian law, then create its own rules and disclose them to the public. It is hoped that this will act as a brake.

In February, the US Department of Defense released its first Artificial Intelligence Strategy report. It says AI weapons will be kept under human control and used without violating international laws and ethics.

But challenges remain. Some worry that countries will interpret international laws in their favor and craft regulations that suit them. Others say it may be difficult to confirm that human control is actually functioning.

Humans have created various tools capable of indiscriminate massacre, such as nuclear weapons. Now the birth of AI weapons that could be beyond human control is steering us toward an unknown and dangerous domain.

The critical question is whether humans can recognize the looming crisis and stop it before it turns into a catastrophe. Human wisdom and ethics are now being tested.

Artificial-intelligence pioneers win $1 million Turing Award

To learn who’s taking home the Turing Award, people might turn to their trusted talking bots, like Siri or Alexa. Or, in fact, some of the very technology the three winners helped bring to life.

Yoshua Bengio, Geoffrey Hinton and Yann LeCun have earned what’s often referred to as the Nobel Prize of the tech world for their pioneering work in artificial intelligence, the Association for Computing Machinery announced Wednesday. The researchers, working both independently and together, helped advance the thinking and application of neural networks, the technology that gives computers the ability to recognize patterns, interpret language and glean insights from complex data.
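At the heart of that technology is a simple idea: a network of weighted connections whose weights are nudged by backpropagation until its outputs match known examples. A minimal toy sketch, training a tiny network on the XOR pattern (an illustration of the general technique only, not the laureates’ actual models):

```python
import numpy as np

# A minimal two-layer neural network trained with backpropagation on XOR.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])           # XOR truth table

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))  # mean squared error
    d2 = (p - y) * p * (1 - p) / len(X)          # backward pass
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(0)   # gradient updates
    W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

The loop repeats the pattern the trio championed: predict, measure the error, and propagate it backward to adjust every weight. Modern deep networks stack many more layers but follow the same recipe.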

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” Cherri Pancake, president of the computing society, said in a statement. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation.”

The trio’s efforts to popularize algorithms that extract patterns in data were initially met with skepticism, the association noted, but their commitment to artificial-intelligence research has led to breakthroughs in many areas of computer science, including speech recognition, robotics and the ways in which machines interpret digital images and videos.

The process of recognizing languages, environments and objects that billions of smartphone users rely on stems from the work of Bengio, Hinton and LeCun. Their research is poised to fuel further advancements as entire industries embrace artificial-intelligence systems, potentially transforming transportation, medicine and commerce.

AI-powered technologies could unlock a future with autonomous cars or earlier and more accurate medical diagnoses.

However, the advancement of artificial intelligence has also prompted concerns over mass automation and the displacement of human workers.

LeCun is a mathematical sciences professor at New York University and the vice president and chief AI scientist at Facebook. Hinton is a vice president and engineering fellow at Google. Bengio is a professor at the University of Montreal and the scientific director of both Quebec’s Artificial Intelligence Institute and the Institute for Data Valorization.

The Turing Award comes with a $1 million prize, funded by Google, the ACM said. The prize is named after the British mathematician Alan Turing, who laid the theoretical foundations for computer science.

AI, Internet Policy Proposals Signal Shift Away From Self-Regulation

Initiatives in Europe could change the way companies use the internet and AI to do business

Waves of new technology policy initiatives from the European Commission and U.K. government reflect the end of an era of self-regulation by the tech sector and the emergence of a new oversight model in which lawmakers and the public have more of a say over the way data and algorithms are deployed.

The European Commission plans to test ethical guidelines for the use of artificial intelligence in a pilot project planned for this summer. The U.K. proposes to create a regulatory body to force internet companies to remove harmful content from their sites. Policy advocates and technology experts said these efforts, along with Europe’s General Data Protection Regulation, the privacy law that went into effect last year, will change the way companies use the internet and AI to do business.

Dean Harvey, a partner with law firm Perkins Coie LLP, said the legal and regulatory obligations will come in several waves. The first to be affected will be the technology developers creating the algorithms and the companies that use those algorithms with customer data containing personally identifiable information. “You’ll have to put in safeguards to ensure you’re handling data and developing AI models in an ethical manner,” he said.

The second wave will wash over companies in retail, marketing and finance, for example, that could be penalized if their business practices are influenced by biased or discriminatory computer programs. Further down the line are businesses that use AI without human data at all—for example, manufacturing companies that use machine learning for predictive maintenance of equipment or trucking fleets.

The European Commission this summer plans to test ethical guidelines for how artificial intelligence is used. Photo: Yuriko Nakao/Bloomberg News

Just as corporations have developed intricate practices to cope with regulation in industries such as finance and energy, they will need to create governance mechanisms to cope with the onslaught of new tech rules. Here are some steps:

• Get ready for the chief ethics officer. While existing corporate structures in legal, compliance and auditing will play a role, many corporations will need a new leader specifically focused on ethics. Thomas Creely, an associate professor in the college of leadership and ethics at the U.S. Naval War College, said hiring ethicists is going to be vital because “the dilemmas are going to be constant and complex.”

In situations where profits and ethics may be at odds, businesses should be prepared to act in the long-term interests of the corporation and the brand. “Some highly accurate models may need to be jettisoned for less performant, but more ethical or transparent, ones,” said Brandon Purcell, principal analyst with Forrester Research Inc.

• Vet vendors and corporate partners. Natasha Duarte, a policy analyst with the nonprofit Center for Democracy and Technology, said nontech companies will need to more thoroughly evaluate third-party AI vendors to find out, for instance, what data their algorithms were trained on and what safeguards are in place for data privacy.

• Keep meticulous records. Businesses may need to adopt policies to report their AI activities. Such reporting could be to the board of directors, to shareholders, to an industry group or to a regulatory agency. “Adopt leading practices that will help mitigate the inherent risk with AI with respect to explainability and bias,” said Martin Sokalski, global leader for emerging technology risk at KPMG LLP.

• Educate management. Companies could be held accountable for actions around AI, so business leaders need to become better educated on the powers and limitations of AI, said Illah R. Nourbakhsh, professor of Ethics and Computational Technologies in the Robotics Institute at Carnegie Mellon University. “There needs to be more AI fluency among business leaders.”