Fake photos and videos are a common occurrence nowadays. It seems like anyone can download an app on their smartphone and start altering photos or videos to create the next viral meme, granting its creator a fleeting 15 minutes of internet fame. Of course, it’s all done in good humour, to elicit a laugh from our online friends and other anonymous members of the online community. But there is a very fine line between content that is amusing and content that is harmful, even potentially dangerous.
It seems that for as long as photographs and films have existed, there have been individuals who fabricated forgeries, whether to entertain people with ghosts caught on camera and other fantastic beings or to deceive with fake evidence. History has taught us that in the wrong hands, the ability to alter photos and videos is synonymous with the ability to manipulate the truth. The issue is becoming more pressing because advances in technology, combined with the propagation of vast quantities of data through the internet, mean that any type of content, fake or true, can spread around the entire globe in a matter of minutes, if not seconds.
Long gone are the days when images could be altered only by people skilled in image and video editing software. For some time now, a new breed of fake content has been circulating on the web, and it’s no longer man-made.
Depending on how you choose to look at it, deepfake technology can be an almost inexhaustible source of amusement (people have begun inserting Nicolas Cage into every movie on the planet), but it also has the potential to become a weapon that takes cyberbullying to an extreme, destabilizes companies, and even turns the tide of elections. The genie is out of the bottle, folks, and as expected, once out, try as hard as you can, you won’t be able to put him back in.
How do we control deepfakes, or at least how do we ensure their ethical accountability?
Humans.ai proposes a straightforward answer to the deepfake dilemma: target the problem directly at its roots, namely the technology that makes deepfakes possible in the first place, AI. As a company focused on developing cutting-edge AI-based solutions and media, we are more than aware that AI enables us to do great things, but at the same time, it has the potential to cause harm if used unethically.
Weaponizing AI through facial recognition and location tracking, discrimination against certain population segments during automated recruiting processes, internet search results, or data misuse to generate deepfakes, fake news, altered voices, or faces is no longer the stuff of James Bond movies or dystopian science fiction novels. It has become a reality that we must contend with. As such, we take a stance and aim to address the lack of accountability for AI and its impact on the world.
The first step towards achieving our vision is to place AI under the close supervision of real humans, who monitor how the technology is used and whether it respects the rigorous Humans.ai ethical standards. This means that AI can never be used outside its intended purpose, for example to create harmful media that denigrates people’s image, ideas, or beliefs.
Secondly, we have created a new class of intelligent assets named AI NFTs, which go beyond the standard iteration of an NFT by integrating an AI component, as well as a person’s unique digital genome, which can be composed of voice, face, posture, mannerisms, and other biometrics unique to an individual. Blockchain is unique in its ability to ensure granular traceability and ownership of the information it stores, as well as data immutability and integrity: once introduced into the network, the information cannot be altered.
By making use of these properties, we ensure that each human is the true owner of their digital genome, which they can use as a building block to develop new AI products or allow other people to use in their own projects.
Lastly, by combining two of the most innovative technologies on the market, blockchain and AI, we developed Proof of Human, a multifaceted governance, verification, and consensus mechanism designed to ensure that the data encapsulated inside AIs can be used only according to a set of rules predetermined by its creator or owner. As you may have already guessed, deepfakes are built using large quantities of data about an individual, namely pictures, video, and voice. In the near future, as this biometric data is stored inside AI NFTs, it would be impossible to create deepfake content without the consent of the owner of the AI NFT and of the human validators, who are an integral component of the Proof of Human mechanism, responsible for verifying every request sent to an AI NFT.
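To make the idea concrete, the consent flow can be sketched in a few lines. This is an illustrative sketch only, not the actual Proof of Human implementation: the names (`AINFT`, `approve_request`) and the two-thirds validator quorum are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AINFT:
    """Hypothetical AI NFT holding a digital genome and its usage rules."""
    owner: str
    genome: dict                               # e.g. {"voice": ..., "face": ...}
    allowed_purposes: set = field(default_factory=set)

def approve_request(nft: AINFT, purpose: str, owner_consent: bool,
                    validator_votes: list, quorum: float = 2 / 3) -> bool:
    """A request to use the genome passes only if the purpose is whitelisted,
    the owner consents, and a quorum of human validators approves."""
    if purpose not in nft.allowed_purposes:
        return False                           # outside the owner's rules
    if not owner_consent:
        return False                           # owner has veto power
    if not validator_votes:
        return False                           # no validators, no approval
    return sum(validator_votes) / len(validator_votes) >= quorum

# Example: a request for an allowed purpose, with consent and 2 of 3 votes.
nft = AINFT(owner="alice", genome={"voice": "..."},
            allowed_purposes={"voice dubbing"})
approved = approve_request(nft, "voice dubbing", True, [True, True, False])
```

The key design point the sketch captures is that approval is conjunctive: the owner’s rules, the owner’s consent, and the human validators must all agree before the data can be used.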
Now that you know how we plan to keep them ethical, let’s take a closer look at deepfakes
Early examples of this technology were used primarily by Hollywood for the movie industry
Deepfakes are a subtype of synthetic media focused primarily on altering a person in an existing image or video to make them do or say things they never would, or even replacing them entirely with someone else’s likeness. A closer look at the word itself tells us more about the technology. Deepfake is a portmanteau of “deep learning”, which underlines the technique used to generate this type of content, and “fake”, which denotes the end product: altered images or videos edited to appear real. Over the last couple of years, the technology behind deepfakes has become increasingly sophisticated and more accessible for people who wish to dabble in this kind of activity. It has evolved so much that it no longer requires high-end hardware to operate; the most rudimentary versions of this software are available for free as smartphone apps.
Although deepfakes surged in popularity relatively recently, the technology is much older than you would initially think. Early examples of this technology were used primarily by Hollywood for the silver screen. In 1994’s award-winning film “Forrest Gump”, filmmakers digitally inserted the titular protagonist into archival footage of John F. Kennedy, making the two interact.
Another example from the same year, closer to what we define today as a deepfake, can be seen in the movie The Crow. During the early stages of filming, a tragic gun accident on set led to the death of Brandon Lee, who was playing the protagonist. The producers decided to digitally stitch the face of the late actor onto a body double to finish the rest of the film. More recent examples of deepfake-style techniques in Hollywood include the de-ageing of actors such as Samuel L. Jackson in the Marvel franchise, Robert De Niro in The Irishman, and Mark Hamill when he donned the robe of Jedi master Luke Skywalker in The Mandalorian.
Since then, the technology has become more refined, and new techniques have emerged, opening the creation of deepfakes to a wider spectrum of aspiring content creators. The democratization of content creation creates fertile ground for new ideas. The problem is that without proper safeguard mechanisms and checks and balances in place, we start to see a higher degree of unethical use of the technology.
This is exactly what happened in 2017 when deepfakes came into the limelight
What makes deepfakes more convincing and dangerous is the fact that the images and videos are entirely generated by computers
Back then, members of the r/deepfakes subreddit started experimenting with the technology and began sharing pornographic videos that, at first glance, featured high-profile female actors such as Gal Gadot, Taylor Swift, and Scarlett Johansson, among others. It quickly became clear that by leveraging deepfake techniques, the members of the group managed to produce much more convincing fake content.
What makes deepfakes more convincing and potentially more dangerous than traditional fake content crafted manually by humans is the fact that the images and videos are entirely generated by computers through a technology called deep learning. Deep learning relies on artificial neural networks designed to mimic the learning process that occurs inside an organic brain.
Repetition is the name of the game: in the case of deepfakes, the computer swaps the face of one person for another’s thousands of times in order to learn the best way to turn a certain input into the desired output. Face swapping works particularly well in deepfake content because, regardless of physiological differences between people, every face shares a basic outline of features and characteristics that the computer can use as a guideline.
At the time of writing, the tools available for making this kind of fake media still need polishing, but it is becoming increasingly clear that with each passing year, the current limitations are falling away and we are approaching a dangerous point at which seeing is no longer believing. As people become more adept at altering videos, our shared notion of the truth begins to erode.
Deepfakes can spark major incidents in the short timeframe between when they go viral and when they are exposed as fake. People live in the now; they rarely verify their sources, which deepens the problem. We are hard-wired to be drawn to the more outrageous interpretation of reality rather than the obvious one. The hard truth is that almost every technological advancement created so far has been weaponized one way or another, and deepfakes are no exception. It’s much easier to turn our minds against ourselves, and deepfakes are an ideal tool to do just that.
Potential for unethical content
“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time. Even if they would never say those things”
An impressionist’s actor’s best friend
To start on a lighter note, let’s focus on the entertaining side of deepfakes. In the video above, comedy actor Bill Hader takes his iconic impressions one step further with the help of deepfake technology by morphing into Hollywood legends Al Pacino and Arnold Schwarzenegger. The Ctrl Shift Face team that made the deepfake showcases the importance of having an actor capable of mimicking the mannerisms of the source in order to produce the best possible content. Although hilarious, there’s no denying that the end result is a bit unnerving.
Another great example is performed by actor and impressionist Jim Meskimen, who recites the poem “Pity the Poor Impressionist” in the voice and with the face of 20 different celebrities, including John Malkovich, Tommy Lee Jones, Christopher Walken, and, of course, Nicolas Cage, with the help of content creator Shamook.
These two examples perfectly illustrate the talent and skill of the people behind the videos, but they also raise a scary question — what kind of deepfakes could a larger organization with access to significantly more funds produce?
A tool for political manipulation
“We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time. Even if they would never say those things”. Without a doubt, this is a statement that should make us reflect on the dangers of deepfakes and how this technology can be used to manipulate the truth. And no, former US president Barack Obama never said those words. In the above video produced by BuzzFeed and comedian Jordan Peele, we are shown just how dangerous deepfake media can be when utilized in a political context. High-profile individuals are easy targets for deepfake creators because they usually come with an abundance of the source material necessary to train an AI to produce high-fidelity fake content.
Although the following example isn’t necessarily a deepfake, it is a good illustration of how manipulated media can be misused in politics. In the video, Nancy Pelosi, the speaker of the US House of Representatives, appears to be slurring her words. In reality, the video had been slowed down by 25%, with the pitch slightly altered, to make Pelosi seem almost intoxicated. The video was debunked as fake, but it still saw widespread distribution, attracting many reactions, including one from former New York City mayor Rudy Giuliani, who tweeted: “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.” Cases like this perfectly illustrate that in the time it takes to debunk a video as fake, the initial reaction can already have caused an unexpected political backlash.
It should be noted that the emergence of deepfakes has also had an unexpected side effect: it creates a loophole that enables anybody to call any video footage into question. This is what happened in 2018 with the president of Gabon, Ali Bongo, who had been absent from the public eye for some time, leading to rumours that he had died and that the government was covering it up. To remove any doubts concerning his health, the president released a video address to the public, which had an unexpected result: his political opponents claimed the video was a deepfake, and shortly afterwards members of the military launched an unsuccessful coup attempt. In the end, it turned out the president was alive after all.
Social engineering manipulation
Contrary to popular belief, the number one threat in cybersecurity is social engineering attacks such as phishing, scareware, baiting, and spear-phishing, which do not tamper with the computer systems themselves but focus instead on the human factor. Deepfakes have the potential to reshape the way malicious actors manipulate their victims, by cloning the voices of friends, family, coworkers, and bosses. There is already a documented case of phishing carried out with an audio deepfake.
In March 2019, the CEO of a UK-based energy company was persuaded over the phone to transfer €220,000 by a person impersonating the CEO of the firm’s German parent company, who asked him to send the funds to a Hungarian supplier. In cases like this, the scammer only needs to push the right buttons, applying enough pressure to make the matter seem urgent and to remove any doubts on the part of the victim.
Gaining access to the data needed to train deepfakes isn’t as hard as you may think. As social media has become an integral part of our lives, we often fail to realise that we are the ones breaching our own privacy, giving people access to photos and videos that reveal our family life, our preferences, and our friends, and that, when bundled together, form an extensive portfolio of our person that can be used to train deepfakes and perform social engineering attacks. For regular people, deepfakes can in fact constitute a bigger threat, because it would be much harder for them to prove their innocence to their peers if a fabricated video surfaced in which they appeared to do something morally objectionable.
To bring a semblance of order and ethical accountability to how the AI of the future is created, used, and owned, Humans.ai has developed an AI development ecosystem that enables anybody, regardless of technical background, to create AI at scale. To deter the unethical use of AI technology and the propagation of potentially harmful deepfake content, Humans.ai gives AI a human touch by placing it under the close supervision of real human beings, who act as gatekeepers, closely monitoring how the technology is used and ensuring that it serves only ethical use cases.