We are turning our attention to one of our colleagues and her research paper on artificial intelligence, which was peer-reviewed by the brilliant minds at the prestigious Imperial College London (ICL). Meet Cristina Carata, the Academic Ambassador for Blockchain and AI at Humans.ai.
Our colleague Cristina has an academic background in computing and law and is currently a PhD researcher at Imperial College London, one of the world’s top 10 universities according to the recent QS ranking. Her academic path at ICL’s Centre for Cryptocurrency Research and Engineering follows an exciting interdisciplinary subject: binding together law and blockchain to create better regulation for cryptocurrencies and a new type of e-government with the help of blockchain and AI. Cristina’s academic expertise in regulating emerging technologies and integrating them for better governance has recently contributed to the growing success of ION, the world’s first governmental AI adviser.
In her research paper “From Citizens to Decision-Makers: Changing the Public Governance Paradigm with the Help of Artificial Intelligence”, Humans.ai’s Academic Ambassador for Blockchain and AI highlights that the widespread use of digital communication and social networks has changed how citizens perceive and react to societal and political changes. Social media comments offer insights into citizen satisfaction and public administration performance, emphasizing the need for tools to gather and utilize this information for policymaking.
An AI-based application, functioning as a governmental adviser, could capture and analyze citizen opinions from social networks, aiding decision-makers in understanding public expectations. In this context, AI has the potential to enhance democracy and policymaking, but it also raises questions regarding ethical considerations and responsible use with transparency and oversight.
We had the pleasure of sitting down with Cristina to delve deeper into the topics presented in her research paper:
The ION research project, developed by the Romanian AI company Humans.ai in collaboration with researchers and professors in the field, started as a pro bono initiative, and remains one, aimed at bridging the communication gap between citizens and the government in the 21st century. Beyond the technical aspects of the project, such as natural language processing and computer vision, an important strand of the academic research behind it addressed the ethical challenges that cutting-edge technologies like artificial intelligence face on a daily basis. This paper can therefore be seen as the natural outcome of a substantial body of research in the ethical field, undertaken so that the project can truly represent a step forward in e-government.
One can gain a wide array of insights from public comments on social media, ranging from general topics such as areas for improvement or public engagement to very specific issues. Whether positive or negative, social media comments indicate how citizens perceive the quality of the services provided by state institutions. They can also help state institutions engage with the public and address their concerns: by responding to comments and resolving the issues raised, institutions can build trust and improve their reputation.
Returning to the subject of the paper, the two main concerns regarding the insights derived from social media in a project like ION are bias in the data, which may distort the decision-making process, and the inadequate representation of marginalized groups. Although not the only concerns from an ethical perspective, they can represent the turning point in the evolution of such projects.
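The second concern, under-representation, can at least be made measurable. Below is a minimal, hypothetical sketch of how an analyst might flag groups whose share of collected comments is suspiciously small; the group labels, counts, and the 10% threshold are invented for illustration and are not part of the ION system itself.

```python
def audit_representation(counts_by_group: dict[str, int],
                         min_share: float = 0.10) -> list[str]:
    """Return the groups whose share of collected comments falls below
    min_share, a sign their voices may be under-represented in the data."""
    total = sum(counts_by_group.values())
    if total == 0:
        return []
    return [group for group, n in counts_by_group.items()
            if n / total < min_share]

# Illustrative sample counts (invented for this sketch).
samples = {"urban": 820, "rural": 60, "diaspora": 120}
flagged = audit_representation(samples)  # the "rural" group falls below 10%
```

A real audit would of course use far richer demographic signals and statistical tests, but even a simple share-of-voice check like this gives human evaluators a concrete starting point.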
There are quite a few roles that an application like ION can play in facilitating real-time communication between citizens and decision-makers. First of all, it can serve as a real-time monitoring tool, allowing decision-makers to respond quickly to citizens’ concerns and needs.
Secondly, the sentiment analysis component can give decision-makers insight into citizens’ attitudes and emotions, helping them understand citizens’ concerns and needs and tailor their policies and services accordingly. AI can also identify patterns and trends in citizens’ feedback, providing valuable insight into citizens’ preferences and enabling more informed decisions and better public services.
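To make the idea of sentiment aggregation concrete, here is a deliberately tiny sketch of the kind of pipeline described above. The word lists, topics, and sample comments are all invented for this example; a system like ION would rely on trained NLP models rather than a hand-written lexicon.

```python
from collections import Counter

# Toy sentiment lexicon (illustrative only).
POSITIVE = {"good", "great", "helpful", "fast", "excellent"}
NEGATIVE = {"bad", "slow", "broken", "unhelpful", "poor"}

def score_comment(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral)
    based on simple lexicon matching."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def aggregate(comments: list[tuple[str, str]]) -> dict[str, Counter]:
    """Group (topic, comment) pairs into per-topic sentiment counts."""
    labels = {1: "positive", -1: "negative", 0: "neutral"}
    summary: dict[str, Counter] = {}
    for topic, text in comments:
        summary.setdefault(topic, Counter())[labels[score_comment(text)]] += 1
    return summary

comments = [
    ("public transport", "the new bus routes are great and fast"),
    ("public transport", "service is slow and unhelpful on weekends"),
    ("healthcare", "clinic staff were helpful and the wait was short"),
]
summary = aggregate(comments)
```

The per-topic counts are exactly the kind of statistic that could later be shared back with citizens, which connects to the transparency point below.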
Last but not least, an AI-based adviser like ION can help decision-makers be transparent about their decision-making processes by sharing statistics and insights gathered from social media posts with citizens. This can help build trust and engagement between citizens and decision-makers.
Integrating citizens’ expectations into the decision-making processes of government institutions can yield several benefits. Firstly, it can enhance citizen satisfaction by enabling decision-makers to gain a better understanding of citizens’ needs and expectations and subsequently tailor public policies accordingly.
Secondly, this practice can improve the transparency of the decision-making process, as it allows greater insight into citizens’ opinions and feedback. In addition, it can build more trust between citizens and decision-makers and promote a more open and accountable government.
Thirdly, incorporating citizens’ expectations can lead to better policy outcomes by providing decision-makers with more informed insights into citizens’ needs and preferences.
Finally, the integration of citizens’ expectations through this approach can be a cost-effective way to gather feedback from citizens, potentially reducing costs associated with traditional methods such as surveys or referendums.
The question has no straightforward answer, because the rapid advancement of AI technology gives rise to new ethical dilemmas with each passing day. Even without the integration of cutting-edge technologies like AI, policymaking remains a complex domain that inherently raises numerous ethical concerns. Chief among these challenges is the potential for bias in the decision-making process caused by biased data.
Moreover, the collection and use of the large amounts of data required for AI systems to function raise concerns about privacy and data protection. The accountability of AI systems is also challenging to establish, particularly when decisions are made automatically. It is, therefore, necessary to have human oversight to identify and address ethical and bias issues that may arise.
Lastly, the use of AI in policymaking could result in certain groups being excluded from the policymaking process due to the lack of access to the necessary technology. Policymakers must, therefore, ensure that they use AI in an ethical and transparent manner, with appropriate human oversight, to ensure that policies are fair, unbiased, and inclusive.
Human judgment serves as a critical safeguard for the integrity of AI systems, especially in public administration. One of its most important roles is to facilitate the identification and mitigation of biases in AI-based projects like ION.
Despite efforts to create unbiased algorithms, AI models can still perpetuate biases present in the data they are trained on. Human evaluators possess the ability to detect such biases and take measures to mitigate their impact on decision-making outcomes. In this way, human oversight helps maintain fairness and equity.
Moreover, the continuous monitoring and evaluation of AI systems by human experts play a crucial role in ensuring ongoing improvement. While AI algorithms are designed to learn and adapt to new data, they may encounter scenarios that were not adequately addressed during the initial training process.
Human oversight enables the detection of such limitations and allows for the refinement of AI models to enhance their performance and relevance over time. Human oversight also promotes responsible decision-making concerning the use of AI in public administration. This involves understanding AI systems’ limitations and potential risks and making informed choices about when and how to employ AI technology. Human evaluators can intervene if they detect potential ethical concerns, ensuring that the ultimate responsibility for critical choices remains with human agents.