AI’s potential to improve the quality of our lives is unquestionable. At the same time, there is a real risk that AI development will deepen inequalities and divides across many demographics. Inclusion is therefore an important matter for the future of AI. Amar Ashar and Sandra Cortesi of Harvard explained why in a recent article published by the Harvard Business School Digital Initiative.
“There are pressing questions around discrimination, transparency and accountability, as well as privacy and safety of those who are using AI emerging technologies”. The article includes cases of exclusion and bias in AI applications and calls for more attention to addressing them. “Some global institutions are beginning to examine how AI can impact and contribute to the social good, but there is much work to be done”. As an example, the article highlights the efforts of the Berkman Klein Center for Internet & Society, whose goal is to broaden the dialogue and engage many stakeholders, in particular those most likely to be affected by bias.
In the same spirit, the AI World Society (AIWS) is making serious efforts toward the equal treatment of people in AI. We strongly believe that the impact of AI must be considered on a human scale, hence our initiative to examine the human side of AI and to build an AI ethical framework, along with standards and models for the management and governance of AI, the development of AI products, and AI education.