
Humans.ai responds with a solution to the open letter to pause giant AI experiments

Artificial intelligence seems to be on the ropes, as a tag team of tech industry leaders, researchers, and other influential figures places the technology under scrutiny, asking in an open letter for a six-month pause in the development of AI so that the potential risks associated with it can be carefully assessed. The letter in question was penned by the Future of Life Institute, a non-profit organization whose mission is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life”.

Among the most prominent signatories of the letter are Twitter and Tesla CEO Elon Musk, Apple Co-Founder Steve Wozniak, Pinterest Co-Founder Evan Sharp, and historian and professor Yuval Noah Harari, to name a few. At the time of writing, the letter has been signed by more than 5,000 people.

Humans.ai’s vision for the future of artificial intelligence aligns with the open letter’s call for artificial intelligence to be moulded into a technology that is “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal”. To make its vision of AI a reality, Humans is developing a state-of-the-art blockchain ecosystem capable of ensuring the secure storage of AI products, transparent governance, and fair monetization.

Interestingly enough, the open letter echoes an initiative that took place in 2015, also headed by the Future of Life Institute, which was backed at the time by a considerable number of tech heavyweights, including Bill Gates and Elon Musk, as well as renowned physicist Stephen Hawking. Similar to the new open letter, its signatories supported the development of AI for the betterment of society while also calling for caution regarding the potential dangers of the technology.

The letter starts by affirming that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and asserting that “contemporary AI systems are now becoming human-competitive at general tasks”, which, according to the authors, raises a series of pressing questions that need answering:

· Should we let machines flood our information channels with propaganda and untruth?
· Should we automate away all the jobs, including the fulfilling ones?
· Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?
· Should we risk the loss of control of our civilization?

After this series of questions, the authors “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Although the open letter leans heavily on the pre-existing negative hype surrounding artificial intelligence through the ambiguous, apocalyptic connotations of the questions it puts forward, it does manage to draw attention to a series of real issues that AI and the people who develop it need to address: how AI systems are governed, audited, overseen, funded, and controlled to ensure that they are used ethically.

This idea was echoed by Gary Marcus, professor emeritus of psychology and neural science at New York University and a signatory of the letter, in an interview with Reuters. "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," he said. "The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize."

The Blockchain of AIs provides an answer to the AI question

Humans.ai is putting the finishing touches on the mainnet of the Humans Blockchain, the first blockchain from the Cosmos ecosystem capable of storing, executing, and managing artificial intelligence. At the time of writing, the Humans mainnet has enrolled 1,000 international validators, who are working around the clock to maintain this unique framework designed to support AI during every aspect of its development lifecycle.

The Humans Blockchain is a Web3 infrastructure built around a consensus mechanism called Proof of Humans, which places people at its centre: they take on the role of validators and ensure that AI is kept honest and used ethically.

Although seemingly unrelated at first glance, blockchain and artificial intelligence are in many ways made for each other. Blockchain is well known for its ability to ensure trust, transparency, and consensus regarding its state and the state of the assets it stores. By bringing artificial intelligence to the blockchain, Humans.ai is providing a solution to pressing issues that revolve around AI technology like ownership, governance, and ethics.
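As a rough illustration of how an AI asset might be anchored to a chain of this kind, the sketch below (in Go, the language most Cosmos-based chains are written in) models a hypothetical on-chain record that ties a model artifact hash to its owner and to validator attestations. The type and field names are assumptions made purely for illustration and do not describe the actual Humans Blockchain data structures or the Proof of Humans protocol.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// AIAssetRecord is a hypothetical on-chain record for an AI model.
// Field names are illustrative assumptions, not the real Humans.ai schema.
type AIAssetRecord struct {
	Owner        string    // address of the account that owns the model
	ArtifactHash string    // content hash anchoring the stored model weights
	License      string    // terms under which the model may be executed
	CreatedAt    time.Time // timestamp of registration
	Attestations []string  // validator addresses that vouched for the record
}

// Attest appends a validator's address, mimicking how a set of human
// validators could sign off on a model's provenance and permitted use.
func (r *AIAssetRecord) Attest(validator string) {
	r.Attestations = append(r.Attestations, validator)
}

func main() {
	weights := []byte("...serialized model weights...")
	h := sha256.Sum256(weights)

	record := AIAssetRecord{
		Owner:        "humans1exampleowneraddress",
		ArtifactHash: hex.EncodeToString(h[:]),
		License:      "execution permitted only with owner consent",
		CreatedAt:    time.Now().UTC(),
	}
	record.Attest("humansvaloper1examplevalidator")

	fmt.Printf("registered AI asset %s with %d attestation(s)\n",
		record.ArtifactHash[:12], len(record.Attestations))
}

The point of the sketch is simply that a content hash, an owner, and a list of human attestations are enough to make a model's provenance and permitted use auditable by anyone reading the chain.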
