I’m driven by two realities today. First, we’re generating more data than ever before—we’re on track to quadruple the current amount of data by 2025. And second, data is handing us an amazing opportunity to make our world a better place. Data fuels artificial intelligence (AI) technologies that are changing the face of healthcare, climate research, community resilience, and even space. AI can literally save the world.
Look at the role AI has played in our COVID-19 response, from tracing to variant prediction to rapid vaccine development. Consider the breakthroughs in improved cancer diagnostics thanks to AI revealing subtle patterns that humans can’t perceive. Notice the momentum in outer space, where my team is working on AI for autonomous satellite failure detection that will support ground operators as they carry out missions thousands of miles away. We haven’t even touched on how AI is driving climate research.
All these innovations in AI are based on massive amounts of data. Algorithms depend on data to “learn” and produce reliable outputs that we use to better understand our world and devise solutions for some of our greatest challenges. The wealth of data that’s fueling rapid progress, however, is under threat of being cut off if we don’t fix a mega problem in tech: trust.
If we’ve learned anything about digital technologies this past decade, it’s that conversations about ethical practices haven’t kept pace. People are concerned, rightfully so, about how their information is tracked.
The issue has grabbed national attention. Notable documentaries, news articles, and congressional hearings have exposed our failure to ensure transparency and ethical standards for data collection, AI, and automation. Some of the most prominent issues include unintended bias in algorithms, job displacement, and the implications of social media’s influence on our behaviors and information flows.
Now’s the time for us to get smarter—as consumers, technologists, and policymakers—before data abuses undermine the promise of AI. Without this three-pronged approach, we’re eroding trust in the very technology that can help us solve our most urgent issues.
To support AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure ethical values, help people achieve well-being and happiness, and address important challenges such as the SDGs. On AI ethics, AIWS.net initiated and promoted the design of the AIWS Ethics framework, built on four components: transparency, regulation, promotion, and implementation for the constructive use of AI. In this effort, MDI invites participation and collaboration from think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.