(Transcribed with OpenAI Whisper)
About the Deepfake Tech Behind the Bogus Taylor Swift Images

The world is awash in deepfakes: video, audio or images that make people appear to do or say things they didn't, or be somewhere they weren't. Many are devised to give credibility to falsehoods and damage the reputations of politicians and other people in the public eye. But most deepfakes are explicit videos and pictures concocted by mapping the face of a celebrity onto the body of someone else. That's what happened in late January, when fake explicit images of pop star Taylor Swift cascaded across social media. Now that artificial intelligence allows almost anyone to conjure up lifelike images and sound with a few taps on a keyboard, it's getting harder to tell whether what you see and hear online is real.

1. What happened to Taylor Swift?

The phony images of Swift were widely shared on social media, drawing the ire of her legions of fans. One image shared on X, the site formerly known as Twitter, was viewed 47 million times before the account was suspended, the New York Times reported. X said it was working to remove all identified images and would take appropriate action against those who posted them. Swift was also among the celebrities whose voices and images were manipulated into appearing to endorse commercial products, in Swift's case a popular brand of cookware.

2. Where else have deepfakes been in the news?

Earlier in January, Xochitl Gomez, a 17-year-old actress in the Marvel franchise, spoke out about finding sexually explicit deepfakes with her face on social media and not succeeding in getting the material taken down, NBC News reported. Deepfakes are also popping up in the 2024 US presidential election. New Hampshire residents received a robocall before the state's presidential primary that sounded like President Joe Biden urging them to stay home and "save your vote for the November election." The voice even uttered one of Biden's signature phrases: "What a bunch of malarkey."

3. How are deepfake videos made?

They are often crafted using an AI algorithm that's trained to recognize patterns in real video recordings of a particular person, a process known as deep learning. It's then possible to swap an element of one video, such as the person's face, into another piece of content without it looking like a crude montage. The manipulations are most misleading when paired with voice-cloning technology, which breaks down an audio clip of someone speaking into half-syllable chunks that can be reassembled into new words that appear to be spoken by the person in the original recording.

4. How did deepfake technology take off?

The technology was initially the domain of academics and researchers. However, Motherboard, a Vice publication, reported in 2017 that a Reddit user called "deepfakes" had devised an algorithm for making fake videos using open-source code. Reddit banned the user, but the practice spread. Initially, deepfakes required video that already existed and a real vocal performance, along with savvy editing skills. Today's generative AI systems allow users to produce convincing images and video from simple written prompts. Ask a computer to create a video putting words into someone's mouth, and it will appear. The digital forgeries have become harder to spot as AI companies apply the new tools to the vast body of material available on the web, from YouTube to stock image and video libraries.

5. What are some other examples of deepfakes?

Chinese trolls circulated manipulated images of the August 2023 wildfires on the Hawaiian island of Maui to support an assertion that they were caused by a secret "weather weapon" being tested by the US. In May 2023, US stocks dipped briefly after an image spread online appearing to show the Pentagon on fire. Experts said the fake picture had the hallmarks of being generated by AI.
In February 2023, a manufactured audio clip emerged with what sounded like Nigerian presidential candidate Atiku Abubakar plotting to rig that month's vote. In 2022, a minute-long video published on social media appeared to show Ukrainian President Volodymyr Zelensky telling his soldiers to lay down their arms and surrender to Russia.

6. What's the danger here?

The fear is that deepfakes will eventually become so convincing that it will be impossible to distinguish what's real from what's fabricated. Imagine fraudsters manipulating stock prices by producing forged videos of chief executives issuing corporate updates, or falsified videos of soldiers committing war crimes. Politicians, business leaders and celebrities are especially at risk, given how many recordings of them are available. The technology makes so-called revenge porn possible even if no actual naked photo or video exists, with women typically targeted. Once a video goes viral on the internet, it's almost impossible to contain. An additional concern is that spreading awareness about deepfakes will make it easier for people who truly are caught on tape doing or saying objectionable or illegal things to claim that the evidence against them is bogus. Some people are already using a "deepfake defense" in court.

7. Is anything being done about it?

The kind of machine learning that produces deepfakes can't easily be reversed to detect them. But a handful of startups, such as Netherlands-based Sensity AI and Estonia-based Sentinel, are developing detection technology, as are many big US tech companies. Intel Corporation launched its FakeCatcher product in November 2022, which it says can detect faked video with 96% accuracy by observing subtle color changes in the subject's skin caused by blood flow. Companies including Microsoft Corporation have pledged to embed digital watermarks in images created using their AI tools in order to distinguish them as fake.

US state legislatures have moved faster than Congress to tackle the immediate harms of AI. Several states have enacted laws that regulate deepfakes, mostly in the context of pornography and elections. The proposed European Union AI Act would require platforms to label deepfakes as such.

The reference shelf:
- Related quick takes on generative AI and AI regulation.
- How Google and Microsoft are supercharging AI deepfake pornography.
- Bloomberg Law on how US regulators are wrestling with stopping deepfakes from affecting the 2024 presidential election.
- A Bloomberg video about Lyrebird, the AI company that puts words in your mouth.
- Research from University College London suggesting humans were unable to detect more than a quarter of deepfake audio recordings.
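The blood-flow cue that Intel's detector reportedly uses comes from remote photoplethysmography: skin color pulses faintly at the heartbeat rate, a rhythm that fabricated faces tend to lack. The sketch below is a toy illustration of that idea only, not Intel's actual method; all function names and thresholds are hypothetical, and synthetic frames stand in for real video.

```python
import numpy as np

def green_channel_signal(frames, region):
    """Average the green channel over a (y0, y1, x0, x1) face region
    for each frame, yielding a 1-D time series."""
    y0, y1, x0, x1 = region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency_hz(signal, fps):
    """Return the strongest non-DC frequency component of the signal."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]

def plausible_pulse(frames, region, fps, lo=0.7, hi=4.0):
    """Heuristic: real skin should show a periodic color change in a
    plausible heart-rate band, roughly 42-240 beats per minute."""
    f = dominant_frequency_hz(green_channel_signal(frames, region), fps)
    return lo <= f <= hi

# Synthetic demo: 10 seconds of 30 fps "video" whose green channel
# pulses at 1.2 Hz (72 bpm), as real skin would under this model.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = [np.full((64, 64, 3), 128.0) for _ in t]
for frame, ti in zip(frames, t):
    frame[..., 1] += 2.0 * np.sin(2 * np.pi * 1.2 * ti)

print(plausible_pulse(frames, (16, 48, 16, 48), fps))  # True for this signal
```

A real detector would need face tracking, lighting compensation, and far more robust signal analysis; this sketch only shows why a periodic skin-color signal is a usable authenticity cue.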
Key Points on Deepfake Technology and Its Impact:
- Pervasiveness of Deepfakes: Deepfake technology, which creates realistic videos, audio, or images that falsely depict people saying or doing things they haven't, is becoming increasingly common. These are often used to spread misinformation or damage reputations.
- Incident Involving Taylor Swift: Fake explicit images of Taylor Swift circulated on social media, one of which received 47 million views on X, the site formerly known as Twitter, before the account was suspended. X said it was working to remove the images and take action against the posters. Swift was also falsely depicted endorsing commercial products.
- Deepfakes in Politics and Entertainment: Deepfakes have appeared in various contexts, including the 2024 U.S. presidential election and in creating explicit content using celebrities' images, as in the case of actress Xochitl Gomez.
- Creation of Deepfakes: Deepfakes are produced using AI algorithms trained on real video recordings (deep learning). They can manipulate elements like a person's face or voice in a video without obvious signs of editing.
- Origin and Evolution of Deepfake Technology: Initially a domain of academics, deepfake technology became more accessible when a Reddit user named 'deepfakes' developed an algorithm using open-source code. The technology has evolved to where generative AI systems can produce convincing media from simple prompts.
- Examples of Malicious Use: Deepfakes have been used for various deceptive purposes, such as spreading false information about the Maui wildfires and the Pentagon being on fire, or manipulating audio clips in political contexts.
- Risks and Consequences: The technology poses risks like market manipulation, falsified evidence of war crimes, and non-consensual pornographic content. It also raises concerns about the "deepfake defense," where individuals caught in compromising situations claim the evidence is fabricated.
- Detection and Prevention Efforts: Detecting deepfakes is challenging, but companies and startups are developing technologies for this purpose. Intel's FakeCatcher and Microsoft's digital watermarking are examples of these efforts. Legislation in the U.S. and the proposed EU AI Act aim to regulate deepfakes, especially in pornography and elections.
In summary, deepfake technology presents a growing challenge in various sectors, from politics to entertainment, with significant implications for privacy, security, and misinformation. Efforts to detect and regulate deepfakes are underway, but the technology's accessibility and convincing nature make it a persistent threat.