As artificial intelligence (AI) companies seek ways to comply with ethical principles and requirements, blockchain could be seen as a means to ensure that AI is deployed in an ethical manner, under certain specific conditions, notes the European Parliament resolution on ethical aspects of artificial intelligence, robotics and related technologies.
As a Web 3.0 platform that brings together an ecosystem of stakeholders around the use of AI for creation at scale, Humans.ai is one of the first players on the market to tackle the AI ethics problem through its Proof of Humans technology, which ensures that each decision taken by an AI algorithm is controlled at a biological level by humans. Alongside privacy and surveillance, and bias and discrimination, the role of human judgement stands as the major philosophical problem of our time when it comes to AI ethics.
Code of Ethics
AI ethics acts as a guideline, a system of moral principles and techniques that aims to support the development process and trustworthy use of artificial intelligence technology. As AI has become a constant presence in our daily lives, being integrated into products and services, organizations and researchers are starting to lay the foundation for an AI code of ethics.
Asimov's laws of robotics
Long before technology reached its current state and Artificial Intelligence entered our lives, Isaac Asimov, one of the most influential science fiction writers, predicted the destructive potential of this tool and codified what is now universally known as the Three Laws of Robotics to draw attention to the risks that AI may bring about.
The first law puts people first, forbidding machines from harming humans or standing by when harm comes to humans. The second law states that robots are to obey humans to the extent that their actions do not contradict the first law. The third law declares that robots are to protect their own existence as long as the act of self-preservation does not breach the first two laws.
The technological advancement of the last decade and the imminent Web 3.0 revolution are pushing organizations and companies towards a new social paradigm, in which humans regain control of the social contract. An unexpected consequence of this shift in mentality is the fact that it has rekindled some old fears of mankind regarding intelligent machines.
We have already managed to create AI algorithms capable of learning at an incredible speed, but when ethics come into question, what happens when an AI exploits human flaws and learns, just as quickly, to do harm? This, at least, is one of our main fears concerning AI. As early as 2016, we saw what AI can do when it learns harmful behaviour from humans with Microsoft's Tay, a chatbot designed to learn from and interact with people on its own.
Over the past years, heated debates in the AI scholarly community around the world have drawn attention to principles by which technology can be kept under human control. This is why many return to Asimov's laws of robotics, treating them as guiding principles rather than literal rules.
The Asilomar AI Principles
Artificial Intelligence has already produced solutions that are part of our daily lives, a set of tools through which humans can use technology to disrupt industries such as healthcare and finance. The need to establish a framework and a set of principles has therefore become imperative.
One of the largest initiatives is The Future of Life Institute (FLI) with its widely recognized Asilomar AI Principles, 23 guidelines grouped into three main categories: Research Issues, Ethics and Values, and Longer-term Issues.
The initiative belongs to a group of top researchers in robotics and AI, and the principles are supported by an overwhelming number of scientists, researchers and company founders. In addition to respected professors such as Nick Bostrom (Oxford), Alan Guth (MIT), Martin Rees (Cambridge), Saul Perlmutter (Berkeley) and George Church (Harvard), the Scientific Advisory Board also includes public figures such as Morgan Freeman and Alan Alda, as well as entrepreneurs like Elon Musk.
The 23 Asilomar AI Principles
Research Issues
1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.
3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and Values
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term Issues
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organization.
Where does blockchain fit in?
With these in mind, Humans.ai is creating a Web 3.0 infrastructure dominated by a consensus mechanism in which real humans play a crucial role as validators in a framework designed to keep AI honest. In other words, we are creating a new consensus mechanism called Proof-of-Human, which relies on humans to leverage their biometric data to prove that a specific AI is still under close biological supervision.
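The idea of humans acting as validators over AI decisions can be illustrated with a minimal sketch. The class and quorum below are purely hypothetical stand-ins: the actual Proof-of-Human protocol is not specified in this article, and hashed credential strings stand in for real biometric proofs.

```python
import hashlib

# Hypothetical sketch: an AI decision is only accepted once a quorum of
# human validators has approved it. Hashed credential strings stand in
# for biometric proofs; all names are illustrative, not the real protocol.

QUORUM = 2  # assumed minimum number of human approvals

class PendingDecision:
    def __init__(self, ai_output: str):
        # Commit to the AI output so approvals refer to exact content.
        self.output_hash = hashlib.sha256(ai_output.encode()).hexdigest()
        self.approvals = set()

    def approve(self, validator_credential: str):
        # A real system would verify a biometric proof here; we simply
        # record a credential hash as a placeholder.
        digest = hashlib.sha256(validator_credential.encode()).hexdigest()
        self.approvals.add(digest)

    def is_validated(self) -> bool:
        return len(self.approvals) >= QUORUM

decision = PendingDecision("AI-generated content")
decision.approve("validator-alice")
assert not decision.is_validated()  # one approval is not enough
decision.approve("validator-bob")
assert decision.is_validated()      # quorum reached
```

The key design point the sketch captures is that the AI output itself cannot be accepted unilaterally; acceptance is gated on distinct human approvals.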
How do we do that? The answer lies in interoperability and synergy with blockchain technology. Building on the European Parliament resolution that recommends blockchain as a foundation in the effort to keep AI honest, Humans.ai offers creators ownership over their digital assets.
The development of an all-in-one platform for AI will offer unprecedented creative possibilities, and blockchain technology is the solution to control this environment, the ultimate goal being to comply with AI ethics.
What is blockchain?
The inherent characteristics of blockchain make it possible to create an ecosystem based on trust between participants. Blockchain offers trust by default through its inbuilt validation and consensus process, which acts as an ideal framework for a control mechanism for our AI platform.
In essence, blockchain is a decentralized and distributed ledger of economic transactions that can be designed to record a wide range of data. The unique design features of blockchain technology make it an incorruptible data ecosystem that offers unparalleled levels of trust, traceability and transparency to the information it stores.
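The "incorruptible ledger" property described above comes from hash-linking: each block commits to the hash of its predecessor, so altering any recorded entry invalidates every later block. A minimal sketch, with illustrative function names of our own:

```python
import hashlib
import json

# Minimal hash-linked ledger sketch: each block stores the hash of the
# previous block, so any tampering breaks the chain of commitments.

def block_hash(block):
    # Deterministic serialization, then SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(data, prev=None):
    return {
        "data": data,
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    }

def verify(chain):
    # Every block must reference the exact hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = [new_block("genesis")]
chain.append(new_block("tx: dataset registered", chain[-1]))
chain.append(new_block("tx: model output recorded", chain[-1]))
assert verify(chain)

chain[1]["data"] = "tampered"  # any alteration breaks the links
assert not verify(chain)
```

Real blockchains add consensus, signatures and distribution on top, but this linking is what makes the stored record tamper-evident.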
Blockchain with AI
Combining the power of blockchain with Artificial Intelligence, Humans.ai allows anyone, anywhere, to create and scale their potential without limit. To involve real humans in a decentralized ecosystem governed by blockchain technology, the Web 3.0 company created a total supply of 7,800,000,000 $HEART tokens.
With a state-of-the-art utility token at its heart, the next-generation blockchain infrastructure created by Humans.ai offers accessibility, accountability, transparency, and ownership.
The $HEART token plays a central role in the governance and payments system within the ecosystem. Humans.ai’s technology utilizes NFTs to recognize assets within its ecosystem (algorithms, data, AIs) and ensure contributions to delivering valuable outcomes are adequately accounted for and recognized.
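Using NFTs to recognize assets amounts to keeping a registry that maps a unique token (here derived from the asset's content hash) to an owner, so that contributed algorithms, data and AIs can be attributed. The class below is an illustrative sketch of that idea only; it is not the actual Humans.ai contract interface.

```python
import hashlib

# Illustrative NFT-style asset registry: a unique token id (the hash of
# the asset's content) maps to exactly one owner, so contributions of
# algorithms, data, or AIs can be attributed. Names are hypothetical.

class AssetRegistry:
    def __init__(self):
        self._owners = {}  # token_id -> owner address

    def mint(self, asset_bytes: bytes, owner: str) -> str:
        token_id = hashlib.sha256(asset_bytes).hexdigest()
        if token_id in self._owners:
            # Content-derived ids make duplicate registration detectable.
            raise ValueError("asset already registered")
        self._owners[token_id] = owner
        return token_id

    def owner_of(self, token_id: str) -> str:
        return self._owners[token_id]

registry = AssetRegistry()
token = registry.mint(b"weights-of-some-model", "creator-address-1")
assert registry.owner_of(token) == "creator-address-1"
```

Deriving the token id from the asset's content means the same algorithm or dataset cannot be claimed twice, which is one way a registry can keep contributions "adequately accounted for".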
Blockchain's feature set of data security, traceability, distribution, data integrity and immutability lends itself naturally to audit processes, which can act as an entry point for enhancing the ethical nature of algorithmic decision-making systems, bringing us one step closer to solving the AI black box problem.
Through disruptive benefits such as transparency and accessibility, blockchain technology will help the public understand the decisions made by machine learning systems, contributing to the overall improvement of AI. In addition, blockchain auditability allows researchers to improve the ethical framework of AI and consolidate the way machines make decisions.
Finally, as the promise of a new world governed by the principles of a Web 3.0 society draws near, it is becoming inevitable for innovative technologies like blockchain and AI to support each other. Blockchain, a technology that had struggled to find a place in the market beyond its role as a crypto enabler, now serves AI companies by providing trustworthy data directly from its creators through decentralized networks, preventing data manipulation in an ecosystem that works without intermediaries.
The role of blockchain technology thus becomes crucial to upholding AI ethics principles, as the Humans.ai business model shows, by giving users incentives to participate in validating creative forms of artificial intelligence and to share their data in a fair and simple manner.