The ethics of AI-driven character creation


As the artificial intelligence field has grown, so too has recognition of some of the more nefarious consequences the technology brings with it, including algorithmic bias, privacy and surveillance concerns, and transparency issues. Due to their potential for propagating misinformation, "deep fake" videos have also created anxiety around deciphering what's real and what's not.

But the practice of developing a standard set of ethics around AI is still in its infancy. That's particularly true when it comes to creating and interacting with digital characters animated by artificial intelligence, whether they are bots, digital assistants, or avatars.

One reason for this lag is that ethical concerns typically emerge only as a technology is adopted. AI-driven characters in gaming, popular entertainment, and industries like law, education, and medicine have only just begun to infiltrate our everyday experiences.

Today these characters are mostly conceived and understood in positive terms: They are designed to make life more interesting, if not more convenient. By and large, industry experts are optimistic about their potential to create a new paradigm of human-machine interaction in media, health, education, customer service, and commerce.

Every maker and user of AI characters must know from the start that any technology built with good intentions can also be misused. As AI-driven characters become authentic enough to blur the line between machine and human interaction, more questions and ethical concerns will emerge.

We spoke with several people immersed in these conversations. From their perspectives, a framework for approaching the ethics of AI-driven characters must be both multidisciplinary and multi-level. And it should first be rooted in an understanding of the ethical dimensions at play.

Education and awareness

According to many experts in the field, AI-driven characters will be widely deployed across various sectors within two to five years. It's now obvious that bias may seep in during the development of these characters, and that certain communities may access them more readily than others.

Laura Montoya is founder and executive director of Accel.AI, a non-profit focused on lowering the barriers to entry in engineering artificial intelligence, as well as co-founder of the Latinx in AI Coalition. She says implicit bias in training AI-driven characters is a concern "not only in the short term for addressing how to work with these artificial intelligence algorithms, but also how to really address some of the social biases that are being perpetuated... So the issue isn't necessarily the technology in that case, but then more specifically, how we interact with each other as human beings."

That's why the onus is on the people advancing this technology to not just be equipped to tackle these issues, but to have a range of experiences that can help prevent such bias.

"We are in the privileged position of being able to see what's coming," according to Justin Hendrix, the executive director at the NYC Media Lab.

At conferences, summits, forums and other meetings (including one Samsung NEXT and Hendrix organized), technologists, developers, and others invested in AI's implications are trying to answer some of these questions, hash out their concerns, address their own biases and stereotypes, and consider whether and how the public understands this technology.

The technical challenge of acquiring enough data to train an AI, for example, blurs into ethical concerns around how much information can be collected from users, who will be represented in that data set, and what biases might be present in any particular sample. Also: What kinds of data protections do we need in a world where real-time data collection is at play?

Protecting data is one of the top concerns when considering both the present and future of AI. This is especially true for communities living in difficult socioeconomic circumstances, whose information and data interact with state-sponsored regulatory systems they may have little to no control over, says Christiaan van Veen, a senior advisor to the U.N. Special Rapporteur on Extreme Poverty and Human Rights who is currently leading the U.N.'s work on the implications of new technologies, such as AI, on human rights.

Certain algorithms currently in use within the welfare system or criminal justice system, for example, are not considered artificial intelligence in the literal sense. But their ethical questions are similar and will evolve alongside the technology: Who are you giving your data to, how might that impact your future, and to what extent does that amount to surveillance?

"The primary problem is that a lot of those systems are built without there being any, and sometimes very little, public debate about them," says van Veen. "And the reason for that is because the development of these systems is highly technical... But beyond that, building those systems in government is often not seen as something that there should be a legislative and wider public debate about. After all, these are just internal administrative processes."

Public debate & private understanding

When it comes to moral decision-making and the ethics of new technology, whether it's intended for government, industry, or popular culture, what's needed is public discussion.

For example, a recent report from the MIT Media Lab on the trolley problem, as it's applied to autonomous vehicles, concluded the following: "Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them."

Usually, though, a tipping point precedes this societal-level discussion. "Things only tend to become a concern of the mass consciousness when they either impact a lot of people or they impact really famous or powerful people," points out Spatial's head of business Jacob Loewenstein.

In other words, people may not start to care until AI has already gone too far. The research and software that academics like computer scientist Siwei Lyu at the University at Albany are pioneering to combat deep fake videos, for example, came about only after those videos first surfaced.

This catch-22, alongside the proprietary nature of AI, is why insiders are engaging more and more with these ethical questions: They must create standards for themselves that can hold up to an eventual public debate, grounded in firm understandings such as the certainty that deep fake videos will emerge and that this will be bad for society.

That also means the industry needs to mature, according to Anezka Sebek, an associate professor of media design at The New School. "We have yet to really define a taxonomy of understanding around AI," she says.

Regulation across sectors

The next step in this framework would require some kind of regulation, which is difficult at the government or state-sponsored level. The pace at which any technology emerges is usually too fast for legislation to keep up with. Furthermore, government regulation typically carries the risk of intimidating innovators.

Companies historically have not been great at monitoring themselves. But the industry is currently instituting various partnerships and protocols as a step toward a kind of internal self-regulation.

The Institute of Electrical and Electronics Engineers, for example, has established a certification program for artificial intelligence geared toward setting standards for products and services in terms of both functionality and ethics. The Association for Computing Machinery has also come out with a code of ethics and professional conduct.

Companies and organizations spanning the for-profit tech industry, academia, research, and nonprofit worlds, including Amazon, Facebook, the ACLU, and the Electronic Frontier Foundation, have also organized under the Partnership on AI to share best practices, research, and dialogue on AI, including studies and discussions on bias and safety outcomes.

Santa Clara University's Markkula Center for Applied Ethics, located in the heart of Silicon Valley, offers teaching materials for tech companies to consider ethics in their tech practice.

Other groups like the AI Now Institute at NYU, which invests in studying the social implications of AI, are organizing workshops and symposiums for idea exchange, and are producing reports and policy toolkits. These are designed both for developers to self-regulate and for people working within different sectors like law and government to stay informed.

Different industries may present their own standards of ethics for these characters. What is appropriate in one context may not be in another, according to Hendrix. "Clearly, if I am talking to an entity that I may believe to be a doctor, there are probably very different ethical considerations than if I am talking to an entity that is portraying itself as if it is something that is going to entertain me," he says. "And then there is a general question of what types of disclosures should there be."

Should a bot using natural language processing to book an appointment on behalf of a person disclose that it's not a real person, for example?

Brian Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, believes the answer is yes. He calls it "the truthfulness aspect," which then governs other actions. "Based on that, you should figure out, or at least be careful with, how you're interacting with it in terms of forming habits and how you treat characters, whether they are human or artificial."
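To make that disclosure point concrete, here is a minimal, hypothetical sketch (in Python) of an appointment-booking bot that opens every conversation with a statement that it is automated before any scheduling logic runs. The function names and the calendar helper are illustrative assumptions, not drawn from any product discussed here.

```python
# Hypothetical sketch of a disclosure-first booking bot.
# The names below (send_message, calendar.next_available) are illustrative
# assumptions, not a real product's API.

DISCLOSURE = (
    "Hi, I'm an automated assistant calling on behalf of a client to book "
    "an appointment. I'm not a human. Is now a good time?"
)


def start_conversation(send_message):
    """Send the disclosure before any other dialogue turn."""
    send_message(DISCLOSURE)


def handle_reply(user_utterance, send_message, calendar):
    """Toy follow-up turn: only proceed to scheduling after the disclosure."""
    if "yes" in user_utterance.lower():
        slot = calendar.next_available()  # assumed helper returning a time slot
        send_message(f"Great. Would {slot} work for the appointment?")
    else:
        send_message("No problem, I'll follow up another time. Goodbye.")
```

The point of the sketch is ordering: the truthfulness disclosure is unconditional and comes before any task logic, rather than being buried in a settings page or offered only if the user asks.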

What this amounts to is less a mass approach to ethics around this technology than an individual one: an understanding of how human beings see their own nature, how they will then interact with the technology, and what the implications of that interaction are.

As a result, it helps to anticipate a societal-level response. Because not all humans are well-intentioned, government research agencies like DARPA, through its Media Forensics program, are already investigating fakery and funding research into technologies that can authenticate the integrity of images and videos. As AI characters become more common, it's a given that certain individuals will try to corrupt them.

But no single level can be counted on to come up with all the answers to these ethical conundrums.

"If you ask for individuals to be the level that make the right decision, there's going to be a certain percentage of individuals who are not going to be able to make the right decision based on their lack of knowledge, or other circumstances they're under," says Green. "And we can't purely rely on corporations. And we can't clearly rely on government either. But if we try to set something up at each one of those levels, then hopefully, between the three of those, we knock out most of the edge cases that are bad and cover most of the center territory."
