Cointelegraph reached out to Nicu Sebe, Head of AI at Humans.ai and a machine learning professor at the University of Trento, Italy, for his opinion on the ban Italy has imposed on ChatGPT.
Recently, Italy made history by becoming the first Western country to ban ChatGPT, the large language model-based chatbot that has dominated AI headlines for months. The ban follows a data breach and inadequate transparency about how OpenAI, the company behind the chatbot, collects and manages user data. The news sent shockwaves through the AI community, sparking a wide-ranging debate about the future of AI regulation and innovation.
The Italian Data Protection Authority (IDPA) urged OpenAI to stop processing Italian ChatGPT users’ data until the company complies with the General Data Protection Regulation (GDPR), the European Union’s law on data protection and privacy in the EU and the European Economic Area. The IDPA cited a data breach that exposed sensitive information such as user conversations and payment data. It also pointed to OpenAI’s lack of transparency, questioning the legal basis on which personal data is gathered and used to train ChatGPT. The IDPA’s controversial decision to ban the chatbot has brought an underlying issue to light: the implications of AI regulation for innovation, privacy and ethics.
Opinions on the ban are divided among tech experts. Some criticize the decision, while others see it as a justified measure that sets a meaningful precedent, one that should push not only OpenAI but every major tech company to show how personal information is handled in AI models and other online services.
Humans.ai’s Head of AI, Nicu Sebe, told Cointelegraph that there is always a race between the development of technology and its correlated ethical and privacy aspects, with technology usually setting the tempo and staying a step ahead of regulators. Sebe noted that this race is rarely synchronized and that, in this case, AI has taken the lead, although he believes ethics and privacy considerations are steadily closing the gap. From his point of view, the ban is “understandable” and will give OpenAI time to “adjust to the local regulations regarding data management and privacy.”
Sebe commented that for countries that do not enforce the GDPR, the ban imposed by Italy creates a “framework in which these countries should consider how OpenAI is handling and using consumer data.” One cause of the ban, in his view, is the discrepancy between Italian law on data management and what companies are “usually being permitted in the United States.”
It is becoming increasingly clear that AI companies need to adjust their game plan, at least in the European space, and strike a balance between providing their services to users and falling in line with regulators. The thorny question that emerges is how they can safeguard privacy and ethics while developing their products without slowing down innovation. Experts agree there is no straightforward answer, but most point to increasing transparency, giving users control over their data, collaborating with governments on compliance and going open source as steps in the right direction.
The Humans.ai representative went on to recall a similar event that recently made headlines: the open letter, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause in the development of more powerful AI systems to allow time for reflection on the potential repercussions of such technologies. In his opinion, the letter raises a series of compelling concerns, but the six-month pause is “unrealistic.” He explained: “To balance the need for innovation with privacy concerns, AI companies need to adopt more stringent data privacy policies and security measures, ensure transparency in data collection and usage, and obtain user consent for data collection and processing.”
Artificial intelligence has developed at an exponential rate in recent years, and the technology’s growing hunger for data has inevitably raised concerns about privacy and surveillance. In Sebe’s view, AI pioneers should bear “an obligation to be transparent about their data collection and usage practices and to establish strong security measures to safeguard user data.” Beyond privacy and surveillance, other ethical concerns include potential bias, accountability and transparency. As a concluding remark, the academic warned that AI systems “have the potential to exacerbate and reinforce pre-existing societal prejudices, resulting in discriminatory treatment of specific groups.”
Ethical AI refers to the design, development and deployment of AI systems aligned with ethical principles and values that prioritize the well-being of people, both as individuals and as a society. As AI technologies become more advanced and widespread, there is growing recognition within the AI community and among tech industry leaders of the need to ensure that these technologies are developed and used in line with ethical norms and values.
The development of ethical AI is also driven by the increasing awareness of the potential risks and harms associated with AI, including biases and discrimination, breaches of privacy and security, and the potential for AI to be used in ways that are harmful to humans and the environment, like disinformation, human abuse, or political suppression.
As a result, there is a growing demand for ethical AI frameworks, standards, and regulations, as well as for AI systems designed and developed with ethical considerations in mind. Many companies, organizations, and governments are investing in the development of ethical AI and incorporating ethical guidelines and principles into their AI strategies and practices.
In general, ethical AI promotes a set of fundamental principles and values:
Fairness and accountability: The tech industry needs to design and develop AI systems that ensure fairness and prevent bias and discrimination. Mechanisms need to be put in place to hold accountable both the people responsible for developing and deploying these systems and the individuals who use the technology unethically. This is crucial because AI systems can perpetuate and amplify biases and discrimination that already exist in society, potentially harming individuals and groups.
Transparency and explainability: AI systems need to be transparent and explainable, with clear documentation and communication about how they are designed and how they make decisions. Individuals, authorities and other organizations need to be able to examine the inner workings of the technology to identify the cause of errors or design faults that may lead to problems or biases.
Privacy and security: Ethical AI ensures that individuals’ privacy and security are respected and protected, which is particularly important given the sensitive nature of the data often used to train and deploy AI systems.
Human-centeredness: Ethical AI prioritizes the well-being and interests of humans, ensuring that AI systems are designed and used in a way that promotes human growth and development and does not cause harm. To achieve these goals, AI needs to be subject to ongoing evaluation and monitoring.
Social and environmental responsibility: Organizations that develop AI need to take into account the potential impact of AI systems on society and the environment, and those systems should be designed and deployed in a way that promotes social and environmental sustainability.
In the long run, ethical AI is likely to play an increasingly important role in the development and deployment of AI technologies as we seek to ensure that AI promotes the well-being of individuals and society as a whole. By promoting values like fairness, transparency, privacy, security, human-centeredness, and social and environmental responsibility, AI companies and researchers can help ensure that this technology benefits humanity rather than causing harm.