A multi-part update series reviewing recent news, resources, and cases related to social media and the technical and legal challenges it creates in eDiscovery
In New Notifications, we reviewed updated social media usage statistics and other evidence of its growing evidentiary significance. In this Part, we discuss three recent areas of growing or potential challenges and related news stories.
Since we last revisited this topic, three aspects of social media evidence have become frequently discussed areas of growing or potential challenges for practitioners: ephemeral messaging on the rise, more emoji in litigation, and deepfakes on the horizon.
Ephemeral messaging is similar to email or text messaging but adds functionality that automatically deletes messages after they have been read or after a short period of time specified by the sender. It is offered as a feature or option by some social media services (e.g., Snapchat, Instagram), as well as by some dedicated apps and services (e.g., Wickr, Confide). Many such apps also take other steps to protect message confidentiality, such as applying end-to-end encryption and implementing anti-screenshot features.
Ephemeral messaging presents obvious challenges for eDiscovery preservation and collection. These apps are designed to auto-delete messages even more quickly than typical automated janitorial functions do, and most apply end-to-end encryption prior to deletion. If the app deletes messages before they can be collected, they may be unrecoverable, and even messages that have not yet been deleted may be inaccessible without custodian cooperation because of that encryption.
Ephemeral messaging functionality was initially popularized by Snapchat, was quickly made the primary focus of more business-oriented applications like Wickr and Confide, and was later adopted as an option by Instagram and experimented with by Facebook. Ephemeral messaging apps are particularly popular with the young. A 2016 study showed that they were in use by 56% of smartphone owners ages 18-29. They are also growing in popularity among businesses. Uber made headlines in November 2017 when an employee testified about the company’s internal use of ephemeral messaging app Wickr, and Uber is not alone.
Ephemeral messaging has been in the news again recently, on three different fronts:
As social media communication channels (and smartphones) have become more frequent discovery sources, so too have emoji (or emoticons) shown up more frequently in cases. In 2019, Santa Clara University law professor Eric Goldman published “Emojis and the Law” in the Washington Law Review, which revealed that “[b]etween 2004 and 2019, there was an exponential rise in emoji and emoticon references in US court opinions, with over 30 percent of all cases appearing in 2018” [emphasis added]. Examples range from landlord disputes to sex trafficking cases. As they increase in frequency, emoji are creating special challenges for eDiscovery and litigation, both technical challenges (due to their differing appearances from platform to platform) and challenges of interpretation (due to their inherent ambiguity and their context dependency).
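The platform-to-platform appearance problem arises because an emoji is stored and produced in discovery as Unicode code points, while its visual rendering is supplied by each platform's own font. A short Python illustration of what is actually preserved in the underlying text:

```python
import unicodedata

def describe(text: str) -> list[tuple[str, str]]:
    """List each code point in a string with its official Unicode name."""
    return [(f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
            for ch in text]

# The "pistol" emoji (U+1F52B) has famously rendered as a realistic
# revolver on some platforms and a toy water pistol on others -- yet
# the stored code point is identical either way.
print(describe("\U0001F52B"))  # [('U+1F52B', 'PISTOL')]

# Some emoji are multi-code-point sequences, e.g. a thumbs-up sign
# followed by a skin-tone modifier, which complicates search and review.
print(describe("\U0001F44D\U0001F3FB"))
```

Because only the code points survive in the collected data, reviewing parties may see a different glyph than the sender or recipient saw, which is one root of both the technical and interpretive challenges noted above.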
Another new source of potential complications in social media evidence is the rise of “deepfakes.” Deepfakes are “highly realistic, falsified imagery and sound recordings” in which the original faces and/or voices are replaced. They are not created using the types of 3D rendering “typically employed in Hollywood VFX studios” but, rather, using publicly-available machine-learning algorithms and source media (i.e., images, videos, and audio recordings). They have quickly progressed from illicit applications to viral social media amusements:
Like all video-adjacent technology, deepfakes first saw significant use in pornography, flooding the Internet with videos that to the untrained . . . eye look like celebrities doing porn. Otherwise, it’s mostly used for comedy, like putting Steve Buscemi’s face on Jennifer Lawrence, or swapping Arnold Schwarzenegger and Danny De Vito in Twins. Sometimes, deepfakers reinstate a role’s “original” casting (like Tom Selleck as Indiana Jones), bring actors like Bruce Lee back from the dead, or – in this latest case – ponder remakes of classics with Marvel Cinematic Universe stars.
Deepfake technology has the potential to create a variety of serious legal complications, including: defamation and false light claims from those depicted (including potential liability for duped journalists), defendants challenging the authenticity of video evidence against them (even when it’s real), and a new need for video analysis and expert testimony (and the costs that will come with both).
At least three states have already taken some legislative steps related to deepfakes. In July 2019, Virginia “officially expanded its nonconsensual pornography ban to include realistic fake videos and photos, including computer-generated ‘deepfakes.’” In September 2019, Texas attempted to criminalize deepfake election interference:
Texas Senate Bill 751 (SB751) amended the state’s election code to criminalize deepfake videos created “with intent to injure a candidate or influence the result of an election” and which are “published and distributed within 30 days of an election.” Doing so is now a class A misdemeanor and offenders can be sentenced to a year in a county jail and fined up to $4,000.
Finally, in October 2019, California passed two measures related to deepfakes: one that authorizes election candidates to bring civil actions against distributors of election-related deepfakes, and one that “provides a plaintiff whose likeness was used in a computer-generated nude or sexual act video or image with the ability to obtain damages and other relief.”
Upcoming in this Series
In the next Part, we will continue our 2020 social media update series with a review of some recent cases of interest.