Artificial intelligence shapes everything from the music we hear and the web pages and news stories suggested to us, to whether taxis are nearby, to the way our cars drive, to the way we work. So how AI develops is more than a computer science issue. It’s an issue for humans to grapple with.
Now an initiative between the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University seeks to ensure that AI develops for the public good. The initiative will support unbiased, evidence-based work on AI across disciplines and sectors, says Albert Ibargüen, president and chief executive officer of the Knight Foundation, one of the effort’s funders.
“Even when we don’t know it, artificial intelligence affects virtually every aspect of our modern lives. Technology and commerce will ensure it will impact every society on earth,” Ibargüen says. “Yet, for something so influential, there’s an odd assumption that artificial intelligence agents and machine learning, which enable computers to make decisions like humans and for humans, is a neutral process.
“It’s not,” he says.
Pepper is one of the new generation of engaging and friendly robots created to communicate with humans. Image: Pixabay
Artificial intelligence’s rapid development brings many tough challenges, says Joi Ito, director of the MIT Media Lab.
“For example, how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society?” he asks. “How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”
Developing AI needs to be a joint effort that includes computer scientists, engineers, social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers, he says.
The Ethics and Governance of Artificial Intelligence Fund—the initiative’s formal name—will address the challenges of artificial intelligence from multiple perspectives and intends to bridge the gap between the humanities, the social sciences, and computing, Ito adds.
The thread running through these otherwise-disparate phenomena is a shift of reasoning and judgment away from people, says Jonathan Zittrain, cofounder of the Berkman Klein Center and a professor of law and computer science at Harvard.
“Sometimes that's good, as it can free us up for other pursuits and for deeper undertakings,” he adds. “And sometimes it’s profoundly worrisome, as it decouples big decisions from human understanding and accountability. A lot of our work in this area will be to identify and cultivate technologies and practices that promote human autonomy and dignity rather than diminish it.”
Specifically, the initiative intends to pursue the following goals and to help answer the questions that arise from them:
- Communicate complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
- Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
- Advance accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
- Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
- Expand the conversation: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?
The two institutes have experience crossing disciplines. The Media Lab developed the Moral Machine platform, which has collected 2.5 million responses from people about whether they think autonomous vehicles should choose to save passengers or pedestrians in a crisis. The Harvard center served as an incubator for the Creative Commons and the Digital Public Library of America.
The Ethics and Governance of Artificial Intelligence Fund will be housed at The Miami Foundation in Miami.
Jean Thilmany is an independent writer.