Eurogroup Consulting has published a new study on organizational ethics in the digital age: towards responsible artificial intelligence. In it, our experts analyze the issues that artificial intelligence raises for risk management.
Artificial intelligence is shaping up to be one of the major technological challenges of the coming decades. It is also a geopolitical challenge and a vector of economic and societal transformation. Yet its rapid development raises many questions and concerns, and a number of high-profile incidents have tarnished its image of pure progress.
WHY CONSIDER THE ETHICAL CHALLENGES OF AI?
Organizations, both public and private, are increasingly turning to artificial intelligence. It promises gains in productivity and efficiency: by taking over a growing number of tasks, in both volume and complexity, it frees people from them. It is also the basis for producing new, high value-added services.
What's the point of AI ethics?
The explosion in the volume of data produced and exploited opens the way to a wide variety of uses for artificial intelligence. But this boom also raises a number of fears, particularly of an ethical nature.
Ensuring that artificial intelligence behaves responsibly is therefore necessary for several reasons:
- Create value while strengthening user confidence in artificial intelligence,
- Prevent reputational risks for the organization,
- Reduce legal risks.
Ethical use to prevent aberrations
These computer programs can adopt unintended practices. This can lead to discrimination or other behaviors contrary to accepted moral principles.
Artificial intelligence is based on algorithms designed and developed by humans, who are not free from bias, whether conscious or unconscious. For example, an AI solution intended to predict crime through statistical analysis turned out to discriminate against the Black population, while a conversational program posted racist and Holocaust-denying comments on a social network.
Ethical use as a source of value
Cases of unintended behavior that conflicts with core values remind us of an essential fact about artificial intelligence: a computer program, whatever it may be, is the product of a social, organizational and cultural environment.
So this is not just a technical issue: it is a genuine challenge of organization and governance, one that concerns every organization that uses AI.
Companies and organizations must therefore take up the subject of algorithmic ethics. In this way, they can ensure that their algorithms conform to their own values, but also to those of society.
While organizations are aware of the issues involved, there is still room for improvement in terms of maturity.
AI ETHICAL FRAMEWORK: A RECENT CONCERN FOR ORGANIZATIONS
The perception of ethical issues in organizations
In general, the level of knowledge about artificial intelligence is relatively low. Even if AI is perceived by decision-makers as an opportunity for improvement, few players understand how it works or can measure the associated risks. Treated as a purely technical issue, AI and its possible biases are often left to developers and technical experts.
- This relative invisibility extends to ethics committees, where they exist: they rarely take up technological issues, owing to an underestimation of the risk's criticality and a lack of expertise.
- When organizations recognize the importance of training, only employees directly involved in designing or using AI tools are targeted.
A lack of organizational maturity
CIOs are increasingly aware of the ethical risks associated with the use of artificial intelligence. However, this concern rarely translates into the implementation of organizational and technical measures designed to reduce these risks. Ethical quality is hardly ever a selection criterion when purchasing AI solutions.
Too little communication
Faced with these challenges, our study puts forward recommendations for the responsible use of artificial intelligence.
These cover:
- Policy and strategy,
- Organization,
- Designing algorithmic solutions,
- Communication and the ecosystem
Technical risk reduction systems in their infancy
When the solution is developed in-house, data quality is a major concern. Ethical biases are largely linked to the lack of diversity and quality in the data used. Yet efforts to improve data quality often ignore the ethical dimension. This is mainly due to a lack of awareness, or an underestimation of the importance of the issue.
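The study stays at the level of principles, but one concrete facet of the data-quality work described above is checking how well different groups are represented in a training set. The following minimal Python sketch illustrates the idea; the dataset, field name, and threshold are hypothetical, not drawn from the study:

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.1):
    """Report each group's share of a dataset and flag any group whose
    share falls below `threshold` (an illustrative cutoff)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": round(n / total, 3),
                "under_represented": n / total < threshold}
        for group, n in counts.items()
    }

# Toy example: a dataset with a heavily imbalanced "region" field
data = [{"region": "north"}] * 90 + [{"region": "south"}] * 10
print(representation_report(data, "region", threshold=0.2))
```

A report like this only surfaces imbalances; deciding whether an imbalance is an ethical problem, and how to correct it, remains a governance question.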
What's more, there is little reflection on the subject, whether inside organizations or within professional circles. This greatly limits the sharing of best practices in the field.
It therefore seems necessary to tackle this issue at the highest level of the organization. To do so, it is essential to recognize that it is both a major risk factor and a source of potential value.
This involves the widest possible consultation to examine the organization's values, and formalize the ethical guidelines applicable to artificial intelligence in a Charter.
The subject should be addressed by an appropriate body, either an existing ethics committee, or a dedicated committee. The composition of the committee should ensure a diversity of expertise: ethics manager, developer, project manager, legal expert, etc.
TOWARDS MORE ETHICAL AND RESPONSIBLE ARTIFICIAL INTELLIGENCE
The diversity of our teams
Within the IT department, diversity in development teams is essential to promote more inclusive and representative AI. This diversity must be accompanied by the promotion of "ethics by design", where ethics is approached as a structuring and imperative element. If the organization does not develop its own solutions but acquires them, it is advisable to make suppliers aware of their responsibilities in this area.
Thinking about the ethics of artificial intelligence is no longer a luxury, but a necessity. Given the stakes involved, it is essential to implement an AI ethics management system, both to align usage with the values of the organization and society, and to offer additional value to consumers who are increasingly concerned about the externalities, both positive and negative, attached to their consumption.
Training and technology watch
As the field of AI is progressing very rapidly, it is essential to keep knowledge and practices up to date. This means monitoring regulations and technology, and revising practices as often as necessary.
As AI algorithms improve with the volume of data processed, their behavior is likely to evolve accordingly, justifying the implementation of monitoring and evaluation throughout their lifecycle. Algorithm ethics must therefore be considered at the design stage, as well as in production.
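The lifecycle monitoring described above can be sketched in a few lines. The example below is purely illustrative (the groups, decisions, and snapshots are hypothetical, not from the study): it computes a disparate impact ratio, a common fairness metric, at two points in a model's life and flags degradation using the well-known "four-fifths" convention:

```python
def selection_rate(outcomes):
    """Share of positive decisions (1s) among a group's model outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected_group, reference_group):
    """Ratio of selection rates between two groups. The 'four-fifths'
    convention treats values below 0.8 as worth reviewing."""
    return selection_rate(protected_group) / selection_rate(reference_group)

# Hypothetical decision snapshots: one taken at design time, one later in
# production, to illustrate monitoring throughout the lifecycle.
at_design = disparate_impact([1, 1, 0, 1], [1, 1, 1, 0])      # 0.75 / 0.75
in_production = disparate_impact([1, 0, 0, 0], [1, 1, 1, 0])  # 0.25 / 0.75

for label, ratio in [("design", at_design), ("production", in_production)]:
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{label}: disparate impact = {ratio:.2f} ({status})")
```

In this toy run the metric is acceptable at design time but degrades in production, which is exactly the pattern that justifies evaluation in production and not only at the design stage.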
Widespread communication
Organizations need to communicate about responsible AI. This communication must be aimed both at employees, to raise awareness, and at customers, to promote responsible use, especially at a time when the technology is the subject of many misconceptions.
OUR RECOMMENDATIONS
Policy and strategy
- Consult customers and staff on the ethical challenges of artificial intelligence
- Educate the organization's top management on the ethical challenges of artificial intelligence
- Build an ethical charter by defining the ethical guidelines applicable to artificial intelligence
- Raise awareness among all staff involved in projects using algorithms
- Implement a purchasing policy incorporating ethical requirements
Designing algorithmic solutions
- Include an "ethics by design" requirement right from the start of solution development
- Verify data quality prior to its use in AI design and training
- Audit data throughout the product lifecycle
- Monitor the ethical compliance of solutions over time
Organization
- Address AI ethics in a governance body
- Promote culturally mixed teams of developers
- Empower product developers
- Maintain the organization's knowledge and practices at the state of the art
Communication & ecosystem
- Integrate AI ethics into the organization's communication
- Introduce a transparency and explainability mechanism for algorithms
- Encourage discussion of AI ethics in professional circles
STUDY AUTHORS
- Hippolyte ANCELIN, Senior Consultant
- Emilie KUHLMANN, Consultant
- Adeline TARAVELLA, Director
Contributors
- Pierre-Antoine PONTOIZEAU, Director
- Sieglin STEVENS, Supervising Senior
Foreword
Anne-Laure NOAT, Partner in charge of DAS Society and Responsible Economy (SER)
Grégoire VIRAT, Partner in charge of the Digital SBA
We are convinced that top management needs to get to grips with the subject and set up a management system for the ethics of artificial intelligence: first, to protect the organization by aligning its uses of AI with its own values and those of society; and above all, to offer additional value to consumers who are increasingly attentive to their consumption.