Trust is a fundamental value of human society that acts as an enabler in all sorts of interactions. Whether it takes place between individuals or between individuals and organizations, companies or other entities, every interaction is based on a relationship of trust. People trust that bank X will keep their money safe, or that the service provider they pay will book their flights, hotel and everything else they need without endangering their personally identifiable information and other sensitive data.
But what happens when you remove people from the equation? How do you ensure that technology is trustworthy?
This question is especially pertinent for artificial intelligence, which has often been diagnosed with a "black box" syndrome: its inner workings are difficult to understand because of a lack of transparency into how it works and how it is used, compounded by other red flags such as questionable accountability and privacy management.
A High-Level Expert Group on Artificial Intelligence set up by the European Commission formulated a set of guidelines for Trustworthy Artificial Intelligence, in which they outline what trustworthy AI should look like and what requirements AI systems need to fulfil to be considered trustworthy. The seven requirements highlighted in the paper address every aspect of AI, from its design and development lifecycle to its deployment and usage.
Through its clever use of blockchain technology, Humans.ai is challenging the status quo surrounding artificial intelligence. By leveraging blockchain as a foundation for its AI ecosystem, Humans.ai falls in line with the requirements outlined in the EU Commission's report. The vision employed by Humans.ai distances itself from the "behind-closed-doors" approach that has been predominant in the AI sphere for decades. Embracing a human-centred approach, in which every AI is closely monitored and supervised by real people, Humans.ai has made it its mission to lay the foundation for a new generation of AI based on transparency and cooperation.
What is trustworthy AI?
Trustworthy AI is based on the idea that trust is a fundamental value for societies, economies, sustainable growth and development. From this logic, we can extrapolate that AI as a whole will unlock its full potential only when trust is established across its entire lifecycle, from development through deployment and use.
According to the guidelines put forward by the EU Commission, the inability to demonstrate the trustworthiness of AI systems and the people behind them can create bottlenecks that prevent the realization of the potentially vast social and economic benefits this technology can unlock.
The paper underlines that for an AI to be trustworthy, it needs to meet three defining criteria: it should be lawful, complying with all applicable laws and regulations; ethical, adhering to ethical principles and values; and robust, from both a technical and a social perspective.
To meet the above-mentioned criteria, the authors of the paper list seven requirements that AI systems need to satisfy in order to be considered trustworthy. These requirements are designed to act as pillars of trust that apply to every stage of an AI model's lifecycle, targeting developers, data scientists and project managers as well as end users.
Human agency and oversight
The value of AI systems stems from their ability to augment human work, making people essentially more effective when performing tasks or allowing them to create new content or artworks without requiring specialized knowledge or years of training. As such, the primary goal of AI systems is to support human objectives, growth, and agency, all while fostering fundamental rights.
Concurrently, AI systems need to be kept under close observation through oversight mechanisms. The paper suggests that governance mechanisms like the human-in-the-loop (HITL), human-on-the-loop (HOTL) or human-in-command (HIC) approaches can provide the necessary means to monitor and regulate AI behaviour. Human-in-the-loop refers to the ability of humans to intervene in the decision cycle of AI systems. Human-on-the-loop is the capacity of humans to intervene in the design stage of the AI system and monitor its operation. Human-in-command is the capability of humans to supervise the activity of AI systems and to decide which AI system to use in a particular situation.
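The difference between these oversight modes can be illustrated with a minimal sketch (the class and function names here are hypothetical, not an actual Humans.ai API): in HITL mode, a human reviewer must approve every individual decision before it is released.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Optional

class Oversight(Enum):
    HITL = auto()  # human-in-the-loop: a human approves every decision
    HOTL = auto()  # human-on-the-loop: a human monitors and can intervene
    HIC = auto()   # human-in-command: a human decides whether to use the AI at all

@dataclass
class SupervisedAI:
    model: Callable[[str], str]         # the AI model being supervised
    mode: Oversight
    review: Callable[[str, str], bool]  # human check: (input, output) -> approve?

    def decide(self, request: str) -> Optional[str]:
        output = self.model(request)
        if self.mode is Oversight.HITL and not self.review(request, output):
            return None  # blocked: no human sign-off on this decision
        return output
```

In HOTL or HIC mode the review function would instead run asynchronously during operation, or once at deployment time, rather than on every decision.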
Humans.ai has designed its AI ecosystem to guarantee human agency and provide built-in oversight mechanisms. The ecosystem built by Humans.ai leverages blockchain technology to put real humans behind every AI to monitor it and ensure its ethical use. The Humans Blockchain differs from traditional blockchain networks by employing an additional set of validators, called workload nodes, that are tasked with executing the AI residing in the Humans ecosystem, adding an extra layer of security and oversight.
The pièce de résistance of the Humans AI ecosystem is the Proof of Human (PoH) mechanism, a complex blockchain-based governance, consensus and verification system that ensures that every AI is backed by a human decision. In PoH, humans leverage their biometric data to prove that a specific AI is still under close human supervision. Humans.ai empowers people to participate in the governance and management of any AI, essentially making sure that the AI's objectives stay aligned with human objectives.
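The article does not detail how PoH works internally. Purely as an illustration of the underlying idea, gating AI execution behind a biometric attestation could be sketched as follows (all names are hypothetical, and a real system would rely on signed, liveness-checked biometric proofs rather than plain digests):

```python
import hashlib

def enroll(biometric_template: bytes) -> str:
    # Store only a one-way digest of the template, never the raw biometric data.
    return hashlib.sha256(biometric_template).hexdigest()

def proof_of_human(enrolled_digest: str, fresh_reading: bytes) -> bool:
    # A live reading must match the enrolled digest before the AI may run.
    return hashlib.sha256(fresh_reading).hexdigest() == enrolled_digest

def execute_ai(task, enrolled_digest: str, fresh_reading: bytes):
    if not proof_of_human(enrolled_digest, fresh_reading):
        raise PermissionError("no valid human attestation for this AI task")
    return task()
```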
Technical robustness and safety
To be considered trustworthy, it's paramount that AI systems be dependable and resilient in the face of cybersecurity attacks. Technical robustness means that AI systems need to employ a preventative approach to security risks, both intentional and unintentional. To unlock their full potential, AI systems need to be designed with an inbuilt fallback plan in case something goes wrong, while also being accurate, reliable and reproducible.
Blockchain is touted as one of the most secure and resilient infrastructures. Due to inherent characteristics like decentralization, distribution, and the ability to ensure data integrity and immutability, blockchain acts as an ideal medium for data and applications, including artificial intelligence. The Humans Blockchain was tailored with security in mind, not only to protect how AI is stored and executed but also to support AI during every stage of its lifecycle.
Privacy and data governance
Closely associated with the principle of technical robustness and safety are privacy and data protection. Preventing privacy breaches requires adequate data governance mechanisms that ensure the quality and integrity of the information stored. Access-control mechanisms are also an integral component that prevents privacy lapses.
The immutable nature of the Humans Blockchain, paired with its ability to guarantee data integrity, makes it an ideal foundation for the AI of the future. Built on the Cosmos SDK, the Humans Blockchain comes with a set of inherent design features and characteristics that align with Humans.ai's goals of putting a human behind every AI while also solving pressing issues concerning the governance, monetization and execution of AI products.
Transparency
An important pillar of trust is transparency. AI systems and their decisions should be transparent and provide visibility into every element of the system. Transparency comprises elements like traceability, explainability and open communication about the limitations of the AI system.
The distributed nature of blockchain technology means that every network participant has access to the information stored. Furthermore, blockchain is an append-only structure: information is never deleted, and every version of it is retained, providing in-depth transparency and traceability. On the Humans Blockchain, everyone can see the AI models stored but cannot interact with them without the approval of the owner.
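Append-only storage is what makes this kind of traceability possible. A toy hash-chained log (a simplification for illustration, not the Humans Blockchain's actual data structure) shows the principle: each entry commits to the previous one, so history can be extended and read but never silently rewritten.

```python
import hashlib
import json

class AppendOnlyLedger:
    """Toy hash-chained log: records can be added and read, never altered or deleted."""

    def __init__(self):
        self._entries = []

    def append(self, record: dict) -> str:
        # Each entry's hash covers the previous entry's hash, chaining history together.
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        body = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self._entries.append({"prev": prev_hash, "record": record, "hash": entry_hash})
        return entry_hash

    def history(self) -> list:
        # Every version ever written is retained, giving full traceability.
        return [e["record"] for e in self._entries]
```

Tampering with any earlier record would change its hash and break the chain for every entry after it, which is what gives auditors an accurate historical view.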
Diversity, non-discrimination and fairness
To qualify as trustworthy, AI systems need to align with and support society's goals of inclusion and diversity while striving to minimize bias and treat humans with equity. Humans.ai's mission is to put real human beings behind every AI decision, regardless of gender, race or political affiliation. One of the core beliefs of Humans.ai is that it takes all of humanity to raise an AI, so there is no room for bias and discrimination. Furthermore, Humans.ai champions the idea that AI is the right instrument to fight prejudice and discrimination against certain groups or people.
Societal and environmental well-being
As stipulated in the previous requirement, AI should benefit all human beings, including future generations. As such, it must be sustainable and environmentally friendly. AI and the systems that support it must account for their impact on the environment, on other living beings, and on society at large. AI is only valuable as an instrument that enhances people's productivity, so it shouldn't cause societal or environmental tension or make people feel obsolete.
Humans.ai adopts an inclusive approach to artificial intelligence, meaning that everyone, regardless of their technical prowess, is encouraged to contribute to the creation of the AI of the future. The Humans AI ecosystem is built around humans, who play a central role in monitoring how AI is deployed and executed. In turn, Humans.ai uses blockchain to answer some of the most pressing issues that have been plaguing the field of AI, like how it is governed, monetized and executed. To adopt an environmentally friendly approach, the Humans Blockchain relies on the Proof of Stake (PoS) consensus mechanism, which is considerably more energy-efficient than Proof of Work.
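Proof of Stake's efficiency comes from choosing block validators in proportion to their staked tokens instead of through a computational race. A minimal stake-weighted selection sketch (illustrative only, with hypothetical names; real PoS chains use verifiable randomness and many additional rules):

```python
import random

def select_validator(stakes: dict, rng: random.Random) -> str:
    # Selection probability is proportional to stake, so no energy-hungry
    # hashing race is needed to decide who proposes the next block.
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]
```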
Accountability
The last requirement for trustworthy AI put forward in the paper is accountability, which refers to the inclusion of mechanisms capable of guaranteeing responsibility and accountability for AI models and their outcomes. Accountability is closely related to risk management: identifying risks and mitigating them transparently. This can be achieved by delegating to humans the role of AI supervisors.
One of the main goals of Humans.ai is to place real human beings behind the decisions taken by artificial intelligence to ensure the fair and ethical use of AI. Due to its unique design choices, characteristics and features, blockchain is an ideal framework for ensuring accountability. This technology's ability to ensure in-depth traceability and complete historical records of information makes it an ideal medium for AI. The Humans AI ecosystem leverages blockchain technology to ensure accountability and traceability, providing an accurate view of how AI is developed, deployed and executed.