When it comes to ethics and sustainability, the fast-paced development of artificial intelligence needs strong guardrails.
The current competition-at-all-costs approach that pits companies against one another in a race for the next big AI milestone is impeding progress rather than furthering it.
OpenAI’s groundbreaking release of ChatGPT, and the quick follow-up of Microsoft’s Bing and Google’s Bard, have created a modern AI arms race that has triggered doomsday prophecies from leading technologists, policymakers and the public.
AI thought leaders have been increasingly ringing the alarm bells, urging a slowdown to break this toxic cycle. Recently, some of the most prominent voices in the industry have called for an immediate six-month pause in AI systems training and even warned that unchecked AI progress could cause an existential risk of human extinction.
At Nokia, we believe that AI can be a positive force for society if, and only if, it is developed and implemented in a collaborative and ethical way. That means building a community that shares common values and talks openly about technologies with such wide-reaching consequences.
Doing so would unleash the full positive potential of AI, such as democratizing access to healthcare and education, streamlining industrial operations, unlocking network efficiency, preserving environmental resources and creating a fairer, more sustainable and more equal society.
We believe that responsible AI builds on six fundamental pillars. Achieving it, however, requires cooperation, and that cooperation comes in three forms: community, collaboration and regulation.
Why can’t we be friends?
We need to come together as an industry before we release these powerful technologies. That starts with industry players working together to ensure such systems are created responsibly. By sharing knowledge and best practices, we can create AI systems that are reliable, trustworthy and free of bias.
The development of these technologies is no easy feat and requires significant investments in research and development. That’s why pooling resources and knowledge is necessary to drive innovation forward and bring us closer to our goals. By showing our commitment to ethical and socially responsible practices, we can earn the trust of the wider public. This is crucial today, when concerns about the use of AI technologies run deep.
To do so, we need to take a “responsibility by design” approach. This means identifying and mitigating risks before they become problematic. For instance, in the development and deployment of AI technologies, responsible design principles such as transparency, accountability and inclusivity can prevent the creation of biased or discriminatory systems. By prioritizing user privacy and data security, we can design systems that limit the collection and use of personal information to what is necessary and appropriate. We need to insist on such responsibility first, rather than last, so that we aren’t forced to put out fires of our own making later. These fundamental principles are at the center of Nokia Bell Labs’ research.
For instance, we have examined the limitations of checklists, the prevailing method for maintaining ethical guidelines in AI development. Instead, we advocate a more interactive and user-friendly approach: prompt cards, delivered through a system called AI Design. This innovation was co-developed by a diverse cohort of experts, including AI developers, engineers, regulators, business leaders and standardization professionals. It streamlines the ethical AI development process by providing developers with an array of best practices and techniques, and it prompts them to think imaginatively and weigh conflicting demands such as fairness and privacy.
Our researchers exhibited this initiative in April at this year’s ACM CHI Conference on Human Factors in Computing Systems in Hamburg. Collaborating with 12 globally renowned experts from universities and from companies such as Microsoft, Google, Meta and IBM, we curated a groundbreaking session dedicated to responsible AI.
Our researchers also attended conferences exploring how ethics shapes AI development. Team members presented at events on Trustworthy and Responsible AI, where we exhibited our AI Design tool, and on Generative AI, examining how it is changing the way we create, design and interact with the world around us.
We also presented a paper at the ACM Conference on Fairness, Accountability, and Transparency showing that scientific studies are disproportionately based on research conducted in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries, producing results that are atypical of the world’s larger population and may not accurately represent human behavior.
Creating such a community requires collaboration, meaning like-minded companies must talk directly to each other to solve problems.
Nokia Bell Labs is hosting an ongoing series of seminars featuring academic and industry experts who are leading the ethical AI movement and shaping future responsible AI systems. These seminars foster collaboration across a diverse range of topics, from fairness to explainability to accountability.
For instance, at his recent seminar Ricardo Baeza-Yates of Northeastern University demonstrated data and model biases in AI systems, while Michael Hind presented IBM’s AI Fairness 360 toolkit to help developers and engineers build trustworthy AI systems.
Vera Liao of Microsoft Research highlighted the challenge of making AI explainable, noting that end users typically do not read explanations or engage with them carefully. Simone Stumpf of the University of Glasgow similarly stressed the human factor, saying such explanations need to be sound while also avoiding overwhelming end users.
Michael Muller of IBM Research and Angelika Strohmayer of Northumbria University focused on uncertainty, emphasizing the importance of building accountability into AI systems by keeping track of the datasets used to train AI and enabling developers to retrospectively revisit past states when necessary.
Two aspects are pivotal when considering the regulatory challenges we face.
First, we must take the initiative to regulate ourselves, or we will invite the intervention of governments and other governing bodies. External regulation imposed from outside could stifle innovation and constrain the possibilities of technological advancement.
Second, we must forge a unified front in establishing firm standards to reconcile the inevitable tensions between ethics and technology. By fostering a collaborative environment rooted in shared values and principles, we can navigate the intricate landscape where ethical considerations intersect with technological progress. Only through collective effort can we ensure that our advancements align with our collective conscience.
The Tipping Point?
In his 2000 bestseller The Tipping Point, Malcolm Gladwell argues that for an idea to spread like an epidemic it requires three things: a deeply connected network where ideas can travel efficiently, an impactful “sticky” message and the right context and conditions.
When it comes to responsible AI, the context is right and the message that something needs to change is strong. What’s missing is the community of AI stakeholders to drive the adoption of these ideas from the ground up.
At Nokia, we believe in collaborating with partners, customers and industry players to accelerate innovation and progress.
If the AI world could come together as a community to pursue a common set of responsible AI principles, we could advance AI without falling into the controversial pitfalls now dominating the news.
Only through forming this community, collaborating and creating regulation can we restore the guardrails that AI requires.