Kenny Chen speaking on the early stages of the AI Commons at the AI for Good Summit 2018 (in Geneva, Switzerland). Photo Credit: P.A.R.T.
“You don’t need to be a rocket scientist or getting a Ph.D. in AI to still grapple with the issues here”
From placing an Amazon order through Alexa to smart stoplights and automated hiring technologies, the advance of artificial intelligence touches everyone’s life, whether we realize it or not.
“Everyone has not only the opportunity, but a responsibility to pay attention and to seek to educate themselves and understand the dynamics of how AI is changing…their lives, their industries, their communities, and their country,” said Kenny Chen, executive director of the Partnership to Advance Responsible Technology.
PART’s mission is to create a set of guidelines for responsible tech and build bridges between all the industries working on AI technology to ensure everyone has a thorough understanding of what’s being developed.
“We’re not looking to reinvent the wheel,” Chen said, “but more so to fill in a lot of those critical gaps that we see, and help empower leadership and institutions on the ground to step up to that opportunity.”
In PART’s view, “responsible tech” means a few different things: preparing the workforce for the economic implications of the rise in AI technology; spreading awareness of privacy and security concerns; and keeping a spotlight on the social inequities that can surface once decision-making is handed off to software. While AI in cybersecurity or retail can make our lives simpler, the systems learn from people, which means they inherit human flaws. Amazon, for example, abandoned a program that scanned résumés and ranked candidates after discovering the algorithm was biased against women, downgrading or eliminating them from consideration.
By starting a dialogue around bias, privilege, and equity in AI, PART believes these issues can be avoided in the future, creating tools that help everyone equally.
With a background in exploring technology and innovation from governments, nonprofits, and startups, Chen started to see the impact AI could have on all of these communities about three years ago: “2016 seemed to be a major inflection point for much of the world when it came to recognizing how urgent and impactful AI would ultimately be.”
Chen was curious who, if anyone, was governing it.
“Our first thought was, something like this has to already exist,” said Lance Lindauer, PART’s business director. “And sure enough, it doesn’t. Not just locally, but there isn’t a footprint nationally or internationally. There’s no centralized voice talking about building ethical frameworks for technology development.”
Lindauer’s background in policy lent itself well to a larger discussion of the implications of AI. “I understand policy and its business,” he said. “It’s the same mechanism to build out any policy—it’s just different inputs, different people involved, and different stakeholders.”
After Chen and Lindauer recognized the need, they brought in talent from across the governmental, technology, arts, and legal spaces to advise them. “Each member of our board has their own unique approaches to engaging in the topic,” Chen said. Board members range from experts in law and technology to cybersecurity, ethics, and inclusivity, including Michael Skirpan, executive director of Community Forge and founder of ethically centered technology consultancy Probable Models; and Alka Patel, inaugural deputy director of the Risk and Regulatory Services Innovation Center at Carnegie Mellon University.
Since its inception last year, PART’s team has been weaving a web of connections locally, nationally, and internationally across the AI community.
Locally, PART launched PGH.AI, a “platform within the city of Pittsburgh that makes the conversation around AI as accessible as possible to people,” Chen said. “We’re always looking for ways to bring workshops and discussions to neighborhoods and communities where people might be at the greatest risk of being displaced or impacted, and just starting that conversation with accessibility and transparency.” Part education, part ethical discussion, PGH.AI aims to make the city a leader in using AI for good.
AI Triangle stakeholders meeting at MIT’s CSAIL building in Boston on March 25, 2019. Photo Credit: P.A.R.T.
On a national level, PART is working with the Pittsburgh Technology Council to create the AI Triangle—a partnership among Boston, Montreal, and Pittsburgh. “The idea being, let’s create an economic and research supercluster around ethical AI between our three cities,” Lindauer said.
Internationally, PART has been involved with the UN AI for Good initiative since its inception. PART also participates in the AI Commons, a global platform that helps scale and replicate AI approaches around the world. PART manages Pittsburgh’s role as a “Living Lab” alongside cities including Paris, Hong Kong, and Mumbai.
And this is all in its first 18 months. At each level of involvement, the team shares and disseminates knowledge, leading to white papers and resources for the community. As it grows, PART wants to become a resource for those intimately involved in the AI community—and newbies. PART plans to engage more with not only experts in AI but also the general population, which can be deeply affected by artificial intelligence.
While, culturally, we’re still hung up on AI appearing as a Terminator-style robot, the reality is that AI is already here, integrated into our day-to-day lives. The more we know about it, the more we can steer the development of the technology toward a common good. “Don’t be that guy with your head in the sand,” Lindauer said. “Recognize AI’s power, respect its potential, educate yourself, and ride the positive waves of its potential.”
“You don’t need to be a rocket scientist or getting a Ph.D. in AI to still grapple with the issues here,” Chen said. “Some of the most valuable conversations about this are going to happen at dinner tables or between friends at a bar.”