Responsible AI Founders: The next frontier in human-centric AI
As AI technology grows more ubiquitous, so does awareness of its human and societal impacts. The emerging field of responsible artificial intelligence (AI) addresses the need for new ethical codes and standards to protect against the potential risks and threats of this powerful new technology. From eliminating model bias to ensuring data privacy, incorporating responsible AI practices promises to be a key focus for many companies in 2021 and beyond.
The first AI ethical standards, developed between 2016 and 2019, neglected to address systemic injustice and ignored the question of problem framing. As a result, businesses are becoming more aware of the commercial and reputational risks posed by unchecked artificial intelligence. Gartner predicts that, by 2023, 75% of large organizations will hire AI behavior forensic, privacy, and customer trust specialists to reduce brand and reputation risk. This is further supported by a survey conducted by the IBM Institute for Business Value, in which 68% of business leaders agreed that customers will demand more explainability from AI in the next three years.
The second wave of AI ethics addressed fair machine learning (ML): technical mechanisms that take into account how AI and its algorithms treat underrepresented communities and people of color. While it addressed fairness, bias, and discrimination, it lacked social context. The real question should not be what we can do with AI, but rather how we can apply it to address human problems.
Now, a third wave of AI ethics — one that is less conceptual and more focused on providing guided support to machine learning developers, such as ML engineers — is emerging. For instance, the Defense Advanced Research Projects Agency (DARPA), part of the U.S. Department of Defense, has launched an Explainable AI program. The DARPA program identifies perceiving, learning, abstracting, and reasoning as defining elements of the third wave.
To learn more about Responsible AI today, we brought together a panel of three AI experts for a LinkedIn Live event, “Responsible AI Founders - the next frontier in human-centric AI.” We asked them about the production and deployment of AI models, regulation and policymaking, and how AI touches people personally.
Why explainable AI matters
One important aspect of Responsible AI is Explainable AI: the ability to interpret a model’s outputs after the fact and to understand how its decisions are made, which helps build trust among users and stakeholders.
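To make the idea concrete, here is a minimal sketch of one common post-hoc explainability technique, permutation importance, applied to a hypothetical loan-approval model. The dataset, feature names, and model choice are all assumptions for illustration and are not drawn from any panelist’s product.

```python
# Illustrative sketch: post-hoc explainability via permutation importance.
# The loan-approval data and feature names below are made up for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_len", "zip_code_risk"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# After the fact, measure how much shuffling each feature degrades accuracy:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>20}: {score:.3f}")
```

A report like this does not explain individual decisions, but it gives stakeholders a first, inspectable answer to "what is this model actually paying attention to?"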
One of our panelists, Rumman Chowdhury, the CEO and founder of Parity AI, said Explainable AI is important for AI companies to grow and scale. “It's the unknown unknowns, the unintended consequences, that have good stewards of data and algorithms and models concerned about the risks their models will introduce,” she said. “So explainability is one factor that significantly helps.”
While explainability is one essential pillar of Responsible AI, Rumman emphasized that it’s not the only one. “You also need contextual understanding, mitigation techniques, and bias detection techniques, and those are all very different,” she said. “It’s great that explainable AI is being identified, but we need to broaden the conversation to think more about what responsibility means in total, thinking of the quantitative as well as qualitative interventions.”
Liz O’Sullivan, vice president of Responsible AI at Arthur, a platform that provides a dashboard view of a post-production AI model’s health, agreed. “Much of it comes down to the conversations and the ownership that has to take place within an organization,” she said. “The inclusion of affected communities and the humanities and other people who are experts in the kinds of issues people are trying to solve with increased automation and AI.”
Other pillars of responsible AI
Another of our panelists, Lofred Madzou, AI project lead at the World Economic Forum, pointed out that many companies use third-party AI systems that haven't been future-proofed or stress-tested. They need a better understanding of the training data sets, he said, as well as the optimization function and the system’s inner workings. “Because in brokering that system, you also broker risks associated with the system,” he added.
A company must build organizational frameworks for its specific context and risks, Lofred said. Companies also need performance metrics they can assess to ensure their system is compliant with their mission and principles. He added that another important question companies should ask is: “How do we make sure employees have the incentive to do the right thing and use this [AI] tool, and how are they rewarded if they do the right thing?”
Rumman also pointed out that many data science tools help with AI fairness, bias detection, and mitigation. She said it’s especially important to be careful about how fairness is measured because what's worse than “garbage in/garbage out” is a veneer of fairness or ethical monitoring.
It’s not enough to merely give the appearance of doing the right thing. “You think you have corrected a problem, but maybe you've done the incorrect thing, which is at best going to do nothing and at worst, could be more harmful than if you didn't do any mitigation at all,” she said.
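As a toy illustration of why measurement choices matter, the sketch below computes one widely used check, a demographic-parity (disparate-impact) ratio, over made-up decisions. The data, group labels, and threshold are assumptions; in the spirit of Rumman’s caution, passing a single check like this can still amount to a veneer of fairness.

```python
# Illustrative sketch: a demographic-parity check on model decisions.
# Decisions and group labels are invented for demonstration; a passing
# ratio on one metric does not make a system "fair".
import numpy as np

decisions = np.array([1, 1, 1, 0, 1, 1, 0, 0, 1, 0])   # 1 = approved
group     = np.array(["a"] * 5 + ["b"] * 5)             # protected-group indicator

rate_a = decisions[group == "a"].mean()
rate_b = decisions[group == "b"].mean()

# Disparate-impact ratio: the "four-fifths rule" commonly flags ratios below 0.8,
# but this single number says nothing about problem framing or downstream harm.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
```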
Rumman predicted that there will never be a single solution for creating a responsible AI system; rather, companies will tackle problems at different stages of AI systems’ life cycles, spanning from pre-production to post-deployment. Parity, which sits at the intersection of explainability, transparency, and responsibility, works on identifying pre-production risks in AI systems. “It helps you create a risk strategy and mitigation in production, which is very different from once you create your models and monitor that post-production,” she said.
She thinks there will be an evolution around how we approach responsible AI. “I think there will be different players playing very important roles in becoming specialized in a particular area of responsible AI related to the product development life cycle,” Rumman added. “So, in concept and development, and then in post-production deployment.”
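For a rough sense of what the post-production side of that life cycle can look like in practice (this sketch is not tied to any panelist’s product), the example below compares live prediction scores against a baseline captured at deployment time. The distributions and alert threshold are assumptions for illustration.

```python
# Illustrative sketch: a minimal post-deployment monitoring check that compares
# live prediction scores against a baseline logged at release time.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, size=5000)    # scores logged when the model shipped
live_scores = rng.beta(2.6, 5, size=5000)      # scores observed on current traffic

# A two-sample Kolmogorov-Smirnov test flags distribution drift that may mean
# the model is now seeing data it was never validated on.
stat, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}); trigger a review of the model")
else:
    print("no significant drift in prediction scores")
```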
How regulators and policymakers impact AI
The panelists agreed that, when it comes to the need for regulation, it is important for regulators and policymakers to be on the same page with those building and founding AI companies.
Lofred said, for example, that European Union and U.S. regulators have approved too many AI systems without properly vetting them. “How do you ensure that the system deployed live is consistent and compared to the set of expectations or legal requirements?” he asked.
He said he’s hopeful that more clearly defined policy guidelines will become the norm because many companies are trying not only to catch up but to improve in terms of liability, fairness, and transparency. “It’s really a great movement,” he said.
“My bet for the next 10 years is that the most competitive businesses will be Responsible AI-driven companies.” — Lofred Madzou, AI project lead at the World Economic Forum
Automated decision-making systems affect people’s lives
Sometimes there are automated AI forces working in the background that people are not aware of, such as when they apply for a job or loan. Liz noted that a biased hiring algorithm, or a system that promotes people from within a company based on document extraction, can cause real harm.
Rumman recently testified at a New York City Council hearing on algorithmic bias and hiring tools. The bill would make it mandatory that people know they were subjected to an automated decision-making system and that those systems undergo audits. “I think for the average person,” she said, “it would help them start to internalize how ubiquitous all these systems are.”
The New York bill underscored that there's not actually a clear definition of what an audit consists of. If the bill passes, Rumman said, the city will have to determine what an audit is and how companies must collaborate with the government on them. “That’s where we, as startups, should be a lot more clever and creative with how we are trying to work with the government,” she said. “Not in a creepy, regulatory capture kind of way, but in a ‘we need to share our best practices’ way.”
Liz added that many “scarier or more harmful applications of AI tend to be opaque” and aren’t realized until someone files a Freedom of Information Act (FOIA) request. She said that’s how predictive policing — the use of analytics to identify potential criminal activity — was discovered, for instance. “Which we know is just a license to discriminate based on historical data that reflects our country’s discriminatory past,” she said.
Predictive policing and fraud detection systems that generate false positives can put people in jeopardy of losing their benefits. An example would be if the government started to use automated analytics to, say, allocate welfare benefits.
“Maybe people will never even know there's AI behind the system,” Liz said. “That actually restricts their ability to take legal action to defend themselves if they're falsely accused of something. There's just a ton of opportunity for these systems to crack. And that's why we need, at least as a very beginning point, for Responsible AI to alert the users that AI is out there.”
Liz added that there are many ethical considerations about AI that still need to be discussed and decisions that need to be made carefully. But she points out some of the positives. “Drug discovery, and personalized medicine, these things wouldn’t be possible without AI,” she said. “And it’s really accelerating due to the constraints we had this year and the newfound appetite to have AI playing a role. It will be interesting to see how that develops.”
Our panelists agreed that an understanding of AI and its risk strategy, explainability, and other ethical considerations is crucial. They also concurred that regulators and policymakers need to work alongside AI program developers and that we need to take great care with automated decision-making systems.
With investors such as Lux Capital, Omidyar Network, and Plug and Play planning to invest in responsible AI in 2021, our panelists remain hopeful for more opportunities for market commercialization and for interdisciplinary collaboration between the private sector and government toward a more human-centric application of AI technology in society.
Lofred believes the industry is moving in the right direction: “My bet for the next 10 years is that the most competitive businesses will be Responsible AI-driven companies.”