Artificial Intelligence (AI) is an incredibly powerful technology that already influences, and will continue to influence, change and advance, almost every aspect of our lives, businesses and governments.
On the other hand, AI can also be incredibly disruptive and raises a number of concerns, including workplace displacement, algorithmic bias, lack of data privacy and the AI black-box problem. Alongside these concerns, AI systems act autonomously in our world and make their own decisions. If we are to leverage the positive power of AI, we ought to do so responsibly. We must ensure that what we teach the technology truly reflects our values. But which values should be considered and prioritized? Whose values? And how do we deal with unavoidable dilemmas?
Who is “we”?
In her podcast episode, Erika Ly shares her belief that this “we” is all of us: from governing authorities to software developers, down to the users themselves.
“All of us have some sort of individual responsibility role to play in our everyday life.”
She says that we each have a responsibility to think about what we are doing and whether it aligns with our values. Are we taking responsibility for our own actions? Ultimately, the goal is to develop technology solutions that enhance our human potential and experience, which means that as customers and users, we should also be part of the feedback loop.
How?
Solid frameworks and binding guidance on the responsible use and development of AI have not yet been introduced. In their absence, ambiguity and misconceptions arise. Recently, however, various stakeholders have come together in an effort to close this gap. The Partnership on Artificial Intelligence is a good example: this organization brings together academics, companies, organizations and other groups to better understand the impacts of AI and to study and develop best practices. Another example is the European Commission's recent publication of the Ethics guidelines for trustworthy AI in the European Union.
What else?
The establishment of strong communities like the one directed by Erika, The Legal Forecast, is another way to responsibly advance technology and innovation. This is especially impactful in the conservative and risk-averse legal space.
“This mindset makes lawyers very good at what they do but not very good at changing themselves.”
The creative and entrepreneurially minded members of The Legal Forecast believe in the power of technology to improve legal practice and access to justice. They work on finding the sweet spot in which lawyers can innovate and responsibly push the profession forward, whilst still feeling comfortable.
Interested in hearing more about some of The Legal Forecast’s projects, how Erika got into the field of law and technology, or about her work at the Berkman Klein Center focused on Indigenous peoples’ interaction with the digital world? Give the inspiring episode below a listen!
The Wired Wig helps plug Digital and Technology Law into businesses. It explains technology concepts and discusses how the Law could respond to innovation.
The podcast is available on Spotify, Google Podcasts, Apple Podcasts and other podcast providers found on Anchor.