In recent years, the ascent of artificial intelligence (AI) has paved the way for remarkable innovations, one of which is the creation of digital humans, or virtual beings. These AI-driven entities, endowed with human-like attributes and capabilities, are steadily finding their place in sectors such as entertainment, customer service, and education. As with any burgeoning technology, however, the rise of digital humans calls for scrutiny of the regulatory landscape to ensure their responsible and safe deployment. A significant stride toward this end is the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed by U.S. President Joe Biden on October 30, 2023.
The Executive Order underscores the U.S. government’s resolve to harness the promise of AI while managing its risks. It lays down a framework encompassing new standards for AI safety and security, protection of privacy, advancement of equity and civil rights, promotion of innovation and competition, and the strengthening of American leadership in AI globally. These directives, though broad in scope, have particular relevance to the domain of digital humans.
A foremost consideration is the establishment of new standards for AI safety and security. The order mandates rigorous testing and validation of powerful AI systems, a directive that extends to the AI technologies underpinning digital humans. The goal is to ensure these virtual beings operate within defined safety parameters, mitigating risks to users and the public. Furthermore, the transparency and disclosure requirements stipulated in the order could drive industry-wide standardization of how digital humans are developed, deployed, and operated.
Privacy protection is another pivotal aspect of the order that could bear on digital humans. Because digital humans often interact with individuals and may collect personal information, stricter regulations on data collection and usage will likely be needed. This, in turn, could prompt developers to embed stronger data protection measures in the AI systems powering digital humans.
The advancement of equity and civil rights, as emphasized in the order, could foster an ethical framework for designing and operating digital humans. Ensuring that these virtual beings are free from bias and respect individuals’ rights would promote their more ethical use, advancing inclusivity and fairness.
The order’s emphasis on promoting innovation and competition signals a conducive environment for the further development of digital humans. By fostering a competitive landscape, the order could spur research and development in the field, potentially advancing the realism, capabilities, and applications of digital humans.
Moreover, the guidance on content authentication and watermarking could have ramifications for how digital humans are presented and identified on digital platforms. It could make virtual beings easier to distinguish from real humans, a measure crucial for fostering trust and authenticity in digital interactions.
Lastly, the order’s aim to advance American leadership in AI globally extends naturally to digital humans. By positioning American companies and researchers at the forefront of innovation in this domain, the order could secure a global leadership role for the U.S. in this burgeoning field.
In conclusion, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is a significant step toward a balanced regulatory framework for AI technologies, including digital humans. By addressing critical aspects such as safety, security, privacy, and ethics, the order lays a robust foundation for the responsible evolution of digital humans, helping ensure the technology is harnessed for the greater good while its risks are mitigated.