Machine-Altered Deepfakes: Establishing Guidance for the Ethical and Lawful Use of Manipulated Digital Media
In the digital age, deepfakes (highly realistic, artificially manipulated media) are increasingly being explored by various entities, including the US military. This development, while promising for psychological operations and disinformation campaigns, raises significant ethical, legal, and policy considerations.
A recent report confirms that United States Special Operations Command (USSOCOM) is actively exploring deepfakes to influence and deceive foreign audiences and adversaries. The move, however, exposes a potential hypocrisy: the US government has repeatedly warned that deepfakes can destabilize democratic societies.
One key concern is misinformation and the erosion of public trust. Even when deployed for legitimate purposes such as psychological operations, military deepfakes could feed a broader climate of misinformation and chaos. The tension lies between operational advantage and the moral responsibility to prevent destabilization and the spread of misinformation.
Legally, the use of deepfakes intersects with laws governing covert operations, human-subjects protections, and regulations on non-consensual synthetic media. Existing legal frameworks criminalize certain malicious uses of deepfakes, such as non-consensual intimate imagery or disinformation with security implications. The military must ensure compliance with these laws while facing the challenges inherent in verifying content in real time and maintaining oversight.
From a policy standpoint, the use of deepfakes demands stricter oversight, clear guidelines, authentication tools, and the promotion of media literacy to counter adversarial exploitation and internal misuse. Policymakers emphasize the need for a balanced approach that harnesses deepfake technology's tactical benefits while managing ethical risks and legal ambiguities through accountability and transparency.
Additional considerations include the risk of adversaries exploiting these tools for espionage, blackmail, and counterintelligence via detailed behavioral simulations. The development of authentication and detection technologies becomes pivotal for defense in such scenarios.
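The authentication side of that defense can be illustrated with a minimal sketch. The example below, built on Python's standard `hmac` and `hashlib` modules, shows one way a publisher could tag footage at capture time so that any later manipulation invalidates the tag. The key and byte strings are invented for illustration, and real provenance systems (for example, C2PA-style content credentials) use public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce a provenance tag for a media file using HMAC-SHA256."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    expected = sign_media(media_bytes, key)
    # compare_digest avoids timing side channels during verification.
    return hmac.compare_digest(expected, tag)

# Hypothetical usage: a broadcaster signs footage at capture time;
# any subsequent frame-level manipulation invalidates the tag.
key = b"secret-held-by-the-publisher"        # illustrative key
original = b"raw-frame-data-of-the-footage"  # stand-in for file bytes
tag = sign_media(original, key)

assert verify_media(original, key, tag)             # untouched footage verifies
assert not verify_media(original + b"x", key, tag)  # altered footage fails
```

The design choice here is that verification requires only the tag and the key, not a copy of the original, which is what makes such schemes attractive for downstream consumers of media.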
One category of deepfakes, Living Person/No Consent (LPNC), likely accounts for the bulk of deepfakes in circulation and attracts the most attention. A notable example is a series of deepfake videos created in 2018 by doctoral students at the University of California, Berkeley, depicting themselves dancing like professionals.
In the realm of national security, deepfakes could impede the recruiting efforts of terrorist groups that rely on the internet to radicalize young men and women. A fake video of a deceased enemy figure, such as Anwar al-Awlaki or Osama bin Laden, criticizing the performance of the group's rank-and-file fighters might help confuse and counter violent extremists.
Looking toward future armed conflicts, commanders might use deepfakes to confuse the enemy and protect forces maneuvering to an objective. In a hypothetical Ukraine scenario, even if President Zelenskyy were killed, a deepfake could allow him to continue appearing on the news to issue guidance, comment on battlefield developments, and congratulate his soldiers on recent gains.
Despite these potential benefits, it is not clear how the US military will balance the potential advantages of deepfakes with their harmful consequences. There has been little open debate about whether the US military should employ deepfakes in support of operations, underscoring the need for transparent discussions and robust oversight mechanisms.
In conclusion, the US military's potential use of deepfakes involves balancing operational utility against ethical imperatives and legal constraints, requiring robust oversight mechanisms, respect for international laws, and policies that uphold trust and legitimacy in information operations.
- The exploration of deepfakes in warfare by the US military raises ethical dilemmas, as the technology may contribute to a culture of deception, harming public trust and stability.
- The study of deepfakes is not limited to the military domain; conversations are also underway in business, finance, education, and the entertainment industry.
- Rapid advances in artificial intelligence and cloud computing heighten the importance of ensuring that national security, financial systems, and everyday life are not undermined by manipulated media.
- Investing in cybersecurity measures is essential to counter the danger of adversaries using deepfakes for espionage, sabotage, and information warfare.
- Education in schools and universities, particularly in fields such as computer science and media studies, plays a crucial role in nurturing a generation that is vigilant against misinformation and capable of navigating the digital landscape.
- Media networks and other public-facing organizations can also promote media literacy, raising awareness of the capabilities and consequences of manipulated media for individuals and communities.
- Existing detection algorithms must be strengthened and refined to distinguish authentic content from doctored and manipulated media, securing the integrity and credibility of the information they process.
- Consideration of deepfakes within defense policy necessitates closer collaboration between the defense industry and the technology sector to develop authentication and detection systems.
- A broader discourse on the implications of deepfakes should encompass diverse perspectives, including tech experts, civil libertarians, journalists, and policymakers, to ensure inclusivity and evidence-based decision-making.
- Adhering to international regulations, promoting media literacy, and fostering transparent dialogue are important steps toward building a society resistant to manipulation and committed to the pursuit of truth.
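On the detection point above, the core idea (flagging statistically implausible frame-to-frame changes in some per-frame signal) can be shown with a toy z-score test. This is an illustrative sketch only: real deepfake detectors rely on learned models over visual artifacts, and the signal values and threshold below are invented for this example.

```python
from statistics import mean, stdev

def flag_anomalies(series, z_thresh=1.5):
    """Return indices of frames whose change from the previous frame
    is a statistical outlier relative to the rest of the sequence.

    `series` is a stand-in for any per-frame signal (e.g. a tracked
    facial-landmark coordinate); the low threshold suits this short
    toy sequence, not real footage.
    """
    deltas = [b - a for a, b in zip(series, series[1:])]
    mu, sigma = mean(deltas), stdev(deltas)
    if sigma == 0:  # perfectly smooth signal: nothing to flag
        return []
    return [i + 1 for i, d in enumerate(deltas)
            if abs(d - mu) / sigma > z_thresh]

# A smooth signal with one spliced-in jump at frame 4:
signal = [1, 2, 3, 4, 50, 5, 6]
print(flag_anomalies(signal))  # → [4, 5]: the jump into and out of frame 4
```

The same shape of check (model the expected dynamics, flag deviations) underlies more serious temporal-consistency detectors, which is why strengthening these algorithms pairs naturally with the authentication measures discussed earlier.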