An international team of computer scientists, philosophers, religion scholars, and others is building computer models that can help test policies before they are enacted, by creating virtual people who respond to those policies.
In politics, it is difficult to know whether a policy will work at the time the decision is made. All policymakers can do is hope for success; if the policy fails, it often cannot be undone. With the help of AI, however, it is now possible to test a policy in advance.
Over the past three years, Boston’s Center for Mind and Culture, the Virginia Modeling, Analysis, and Simulation Center, and the University of Agder have been running the Modeling Religion Project, which enables experiments on religion-related policy. It involves computer models populated with virtual people, or “agents,” that mimic the attributes and beliefs of a real country’s population using data collected from that country. The agents are also governed by a set of social-science rules about how people tend to interact under various circumstances. The project gives politicians an assessment tool for comparing policy options in order to reach the best-informed decision.
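To make the idea concrete, here is a minimal, hypothetical sketch of such an agent-based simulation. It is not the project’s actual model: the `Agent` class, the bounded-confidence interaction rule, and the `policy_shift` parameter are all illustrative assumptions, standing in for the real population data and social-science rules the project uses.

```python
import random

class Agent:
    """A virtual person with a single belief value in [0, 1]."""
    def __init__(self, belief):
        self.belief = belief

def simulate(agents, steps, policy_shift=0.0, influence=0.1, seed=0):
    """Run a toy interaction model and return the mean belief.

    Each step, two random agents move their beliefs toward each
    other, but only if they are already similar (a simple
    bounded-confidence rule). A nonzero policy_shift represents a
    hypothetical policy that nudges every agent's belief upward.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        a, b = rng.sample(agents, 2)
        if abs(a.belief - b.belief) < 0.3:  # only similar agents influence each other
            mid = (a.belief + b.belief) / 2
            a.belief += influence * (mid - a.belief)
            b.belief += influence * (mid - b.belief)
        for ag in agents:
            ag.belief = min(1.0, ag.belief + policy_shift)
    return sum(ag.belief for ag in agents) / len(agents)

# Compare average belief with and without the hypothetical policy.
baseline = simulate([Agent(i / 99) for i in range(100)], steps=500)
with_policy = simulate([Agent(i / 99) for i in range(100)],
                       steps=500, policy_shift=0.0005)
```

Running both simulations from the same starting population lets a decision-maker compare outcomes side by side before any real-world policy is enacted, which is the core appeal of this kind of tool.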
Despite its usefulness, it is the potential for social harm that the researchers aim to address, as a mistake could lead to unintended consequences. It is therefore necessary to pursue this work with transparency and to speak openly about its inherent ethical risks. This aligns with the first layer of the AIWS 7-layer model for AI ethical standards: a system’s design and performance must be sufficiently transparent.