Brown computer scientist aims to protect people in an age of artificial intelligence
Brown University
As data-driven technologies transform the world and artificial intelligence raises questions about bias, privacy and transparency, Suresh Venkatasubramanian is offering his expertise to help create guardrails to ensure that technologies are developed and deployed responsibly.
“We need to protect the American people and make sure that technology is used in ways that reinforce our highest values,” said Venkatasubramanian, a professor of computer science and data science at Brown University.
On the heels of a recently concluded 15-month appointment as an advisor to the White House Office of Science and Technology Policy, Venkatasubramanian returned to Washington, D.C., on Tuesday, Oct. 4, for the unveiling of “A Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” during a ceremony at the White House.
Venkatasubramanian said the blueprint represents the culmination of 14 months of research and collaboration led by the Office of Science and Technology Policy with partners across the federal government, academia, civil society, the private sector and communities around the country. That collaboration informed the development of the first-ever national guidance focused on the use and deployment of automated technologies that have the potential to impact people’s rights, opportunities and access to services.
“As a nation, we’ve done this before with consumer privacy and the Patient’s Bill of Rights, for example,” Venkatasubramanian said. “Civil rights and civil liberties are a sacred institution in our country… Every major country and bloc of countries in the world is thinking about what it takes to govern automated systems and account for bias, but the U.S. has not, so this is something that was a long time coming.”
Venkatasubramanian is a researcher and educator immersed in the development and impact of technology and artificial intelligence. The opportunity to advise at the national level aligned not only with his expertise, but also with his concerns about the ethical use of technology and the bias embedded in the design of some AI tools, which can encode past prejudice and perpetuate discrimination.
Q: Why are “guardrails” around the development and use of AI an issue of national importance?
We recognize that there are a lot of potential benefits from automation and data-driven technology — all these promises of what could be. But we also see that the promises often don’t pan out. For example, we can try to build an AI system to prevent discrimination in the criminal justice system, but systems that suck up data from previous arrests are irrevocably tainted by the history of racial injustice in the criminal justice system. And when implemented at scale, this taint spreads. A system fed that data is just going to amplify the biases it contains, unless there are rigorous and carefully designed guardrails.
These technological systems impact our civil rights and civil liberties with respect to everything: credit, the opportunity to get approved for a mortgage and own land, child welfare, access to benefits, getting hired for jobs — all opportunities for advancement. Wherever we put these systems in place, we need to make sure they’re consistent with the values we believe they should reflect, and that they’re built in ways that are transparent and accountable to the public. It’s not something we can slap on after the fact.
Q: As a computer scientist, why are you concerned about the impact of technology on society?
I have been studying these issues for almost a decade, thinking about what’s coming next and what the world will look like when algorithms are ubiquitous. Ten years ago, one concern I anticipated was whether we could trust these systems to work the way they’re supposed to, and how we would know these systems are accountable to the public and our representatives.
Whether you like it or not, the technology is here, and it’s already affecting everything that shapes you. You are — without your knowledge — adapting how you live and function to make yourself more readable to technology. You are making yourself machine-readable, rather than making machines human-readable. If we don’t pay attention to this, the technology will be driving how we live as a society rather than society making technology that helps us flourish and be our true selves. I don’t like to frighten people, but it’s true — and it’s important.
Q: Is AI a good thing or a bad thing?
Neither, really. It’s not the technology that’s good or bad, AI or not. It’s the impact — the harms — that we should be concerned about. An Excel spreadsheet that produces a score that keeps someone in detention before trial is as bad as a sophisticated AI system that does the same thing. And a deep learning algorithm that can help improve crop yields is amazing and wonderful. That’s why the AI Bill of Rights focuses on impact — on people’s rights, opportunities and access to services — rather than on the technology itself, which changes and evolves rapidly.
Q: What will the Blueprint for an AI Bill of Rights do?
Think about prescription drugs, for example. You don’t have to worry that the drug you’re taking has not been tested, because the FDA won’t let it come onto the market until it’s gone through rigorous testing. Similarly, we’re confident that our cars will work and that recalls happen whenever the National Highway Traffic Safety Administration discovers a problem; and we’re confident that our planes work and that every new kind of jet goes through rigorous testing before being flown. We have many examples to draw from where we don’t let new technology be used on people without checking it first. We can look to those as a guide for what we think is important, because technology affects everyone.
This AI Bill of Rights is a blueprint that goes beyond principles. It provides actionable advice to developers, to civil society, to advocates, to corporations, to local governments and to state governments. There are various levers to advance it: regulation, industry practices, guidance on what governments will or won’t build. There is no silver bullet here, but all the levers are within reach. It will take the whole of society to advance this work.
Q: How did your experience as a White House advisor impact you?
It was life-altering. My brain now works in ways I cannot — and don’t want to — undo. I’m constantly thinking about the bridges between research and innovation, society and policy. As a country and as researchers, we’re still coming to terms with this. For a long time, we’ve thought of technology as a thing we use to make life better. But we’re not as familiar with technology as a thing that changes our world. Trying to make policy for an entire country — and in some ways, the entire world, because the U.S. is a leader — is challenging because there are so many competing interests that you must balance.
In my time in government, I was impressed by how complex these issues are and how subtly they unfold in different domains — what makes sense when thinking about health diagnostic tools doesn’t really work if you’re thinking about tools used in the courtroom. I have a deeper appreciation for how many dedicated people there are within government who want to make a difference and need help and bandwidth to do it.
One thing that I’ve realized in the years I’ve spent working in policy spaces is that it’s critical to help policymakers understand that technology is not a black box — it’s malleable and evolving, and it helps shape policy in ways that we might not expect. Technology design choices are policy choices, in so many ways. Coming to terms with how tech and policy influence each other requires a lot of education — both for technologists and for policymakers.
Q: How will this impact your work at Brown?
I cannot think of a place that better embodies the values of transdisciplinarity and scholarship in service of the public good than Brown. In my years studying the impact of data-driven technology on people and communities, I’ve learned the critical importance of bringing a variety of perspectives to bear on any specific problem. Technologists alone cannot solve the problems caused by the clash of tech and society, but neither can any other group of thinkers and actors.
The Brown campus ethos is incredibly cooperative, with a deeply embedded commitment and passion for public service among the students, faculty and administration. As my colleagues and I at the Data Science Initiative work toward building a new center for tech responsibility, I’m focused on the mission of redefining how we design and teach technology so that it centers the needs, problems and aspirations of all — especially those whom technology has left behind.
I’m convinced that we have the creativity and the tools to build tech that helps us flourish and lets all of us benefit from advancements in tech. To do this, we have to bring together all the amazing ideas from engineering, public health, medicine, the social sciences, the humanities, policy leaders and technologists. I’m committed to encouraging and contributing to that vibrant ongoing dialogue on campus and to creating a transdisciplinary home where we can come together to solve problems and solve them well.