How to reconcile innovation and responsibility in the age of generative AI?

The rise of generative artificial intelligence (GenAI) is disrupting businesses, caught between promises of increased efficiency and major ethical challenges. Yet, while more than half of employees use it outside of any formal framework, few organizations have implemented suitable governance.

By Sandrine Charpentier

March 18, 2025

9 min read


The explosion of GenAI is profoundly redefining working methods and business strategies: 58% of employees use these technologies outside of any formal framework, and 87% of them report the absence of an internal policy governing their use, according to a Salesforce study conducted in November 2023.

The risks are multiple: algorithmic bias, data leaks, environmental impact... all issues that demand ethical regulation. Faced with these changes, some companies are taking the lead by establishing ethics committees and technological safeguards. But this awareness remains tentative.

At a time when the European Union has adopted its AI Act (which entered into force on August 1, 2024) to regulate these practices, companies face an imperative to innovate, coupled with increased responsibility towards their employees and their stakeholders.

AI: the risks of laissez-faire

In a context where nearly two-thirds of executives see generative AI as a lever for restructuring their work organization, according to a Boston Consulting Group (BCG) study published in June 2024, the challenge is not just technical: it is fundamentally ethical.

The risks take several forms. The first facial recognition algorithms, for example, already illustrated how poorly designed systems could fail to recognize certain populations, such as Black people or women (on this subject, see the documentary "Coded Bias", which deciphers these biases and their consequences).

Today, such biases threaten to perpetuate discrimination (a code sketch after the list illustrates the selection-bias case on toy data), whether through:

  • availability bias: the tendency to judge based on information that is most easily accessible in our memory,
  • confirmation bias: the tendency to favor, search for, and interpret information in a way that confirms our pre-existing beliefs or hypotheses,
  • selection bias: a bias that occurs when the sample studied is not representative of the target population.
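To make the last of these concrete, here is a minimal sketch in Python, using scikit-learn and purely synthetic toy data (both our own assumptions, not anything from the companies cited): a model trained on a sample where one group is underrepresented performs markedly worse on that group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy two-feature data; `shift` moves the group's decision boundary.
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Non-representative training sample: 950 cases from group A, 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced test sets reveal the gap the skewed sample created.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy, group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("accuracy, group B:", accuracy_score(yb_test, model.predict(Xb_test)))
```

On this toy data, accuracy on the underrepresented group drops sharply while the majority group is barely affected, which is exactly the failure mode the "Coded Bias" examples describe.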

In parallel, the exploitation of low-cost labor continues in emerging countries such as India or Kenya, where workers classify, annotate, and document images and content from the Web to train AI models. As for the ecological footprint of digital technology, it already represents 4.4% of France's carbon footprint according to ADEME (the French Agency for Ecological Transition), with data centers accounting for a substantial share. Proof that innovation is not disconnected from environmental and social issues.

Coded Bias - Netflix

Concrete examples abound. In April 2023, the Samsung group reported an incident in which employees unintentionally disclosed confidential information to ChatGPT, risking the compromise of industrial secrets. Furthermore, ingenious queries launched by users allowed a Microsoft *chatbot* (conversational agent) to disseminate sensitive data: the compromised information included emails, phone numbers, and API keys, revealing flaws in these systems' data protection. These examples demonstrate that, in the absence of adequate safeguards, the use of these technologies can lead to major information leaks.

Initiatives to frame AI use

Conversely, Orange stands out for its proactive approach. The company has developed Dinootoo, its internal Generative AI toolkit, so that employees can learn about the potential of LLMs (*Large Language Models*) and adopt them simply and safely. By making it accessible to more than 50,000 employees, the group seeks to combine innovation from large models – OpenAI, Google, Anthropic, Mistral, Meta – with a reinforced security policy that guarantees that information shared by employees is not reused by external actors. This system is accompanied by an ethics board composed of independent experts, ensuring continuous reflection on data protection and fairness in the use of AI.

Dinootoo Chat is based on generative AI. Maintain your critical thinking and remember to check the answers provided (especially for risks of bias and hallucinations).
Dinootoo, the chatbot created by Orange for its employees

Worldline, a major player in payments in France and Europe with 18,000 employees, has also evolved its governance of generative AI. Rather than blocking access to ChatGPT, its strategy consists of clearly framing its use: access to these services is granted on the basis of rules communicated to everyone, and employees are encouraged to use them. In parallel, an awareness campaign aims to demystify the actual capabilities of generative AI.

These initiatives illustrate that a global approach is essential. It is not enough to deploy advanced technologies without adopting a solid algorithmic governance framework. This involves defining rules, practices, and mechanisms to control the use of algorithms and ensure ethical, transparent, and responsible operation of these technologies. This includes limiting bias, protecting personal data, and ensuring their compliance with current regulations.

Every step, from model design to data validation processes, must be planned to secure information and respect commitments in terms of social and environmental responsibility. The cost of responsibility – whether it involves investments in security or the implementation of rigorous audit processes – appears as a strategic investment to sustain innovation.

Concretely, how can companies engage in this approach?

Establish a regular algorithm audit

Companies can perform frequent checks to identify and correct potential biases in their AI models. This involves rigorous testing, independent evaluations, and adjusting datasets to ensure greater fairness and representativeness.
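As a minimal sketch of what one such check can look like, the plain-Python snippet below (the decisions and group labels are hypothetical stand-ins for real audit data) compares selection rates across groups, a simple demographic-parity measure:

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of favorable decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit extract: 1 = favorable outcome (e.g. CV shortlisted).
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")
```

In practice, this single number is only a starting signal: a large gap between groups is what triggers the independent evaluations and dataset adjustments described above.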

Strengthen the security of data used by AI

It is important to protect sensitive information by applying strict encryption and anonymization protocols. Among the identified best practices, limiting data access to authorized personnel only helps reduce the risk of leaks or misuse.
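Here is a minimal sketch of that best practice, assuming prompts are filtered before they leave the company. The patterns below are illustrative only, not an exhaustive anonymization protocol:

```python
import re

# Illustrative patterns for obvious identifiers; a real deployment would
# use a vetted anonymization tool, not three hand-written regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d .-]{8,}\d"), "[PHONE]"),
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b"), "[API_KEY]"),
]

def redact(prompt: str) -> str:
    """Replace identifiable fragments before the prompt reaches an external LLM."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact jane.doe@acme.com on +33 6 12 34 56 78, key sk-abcdef1234567890ABCD"))
# -> Contact [EMAIL] on [PHONE], key [API_KEY]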

Integrate environmental and social criteria into AI development

To minimize the ecological footprint of AI systems, companies can prioritize more energy-efficient infrastructures and optimize the use of computing resources. It is also necessary to raise employee awareness about usage: in other words, favor specialized software over systematically calling on artificial intelligence, for example a search engine rather than ChatGPT for a simple lookup (see the sketch below). In parallel, particular attention must be paid to the working conditions of annotators and to the social impacts of the deployed algorithms.
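The sketch below illustrates this reflex with a deliberately naive router; the keyword heuristics are our own assumption, not an established method. Plain lookups go to a search engine, and the energy-hungry LLM is reserved for genuinely generative tasks:

```python
# Hypothetical keywords suggesting the query needs text generation.
GENERATIVE_HINTS = ("write", "draft", "summarize", "rephrase", "translate")

def route(query: str) -> str:
    """Send generative work to the LLM, plain lookups to a search engine."""
    q = query.lower()
    if any(hint in q for hint in GENERATIVE_HINTS):
        return "llm"           # genuinely generative work
    return "search_engine"     # a simple lookup: a search engine is enough

print(route("capital of Australia"))           # -> search_engine
print(route("Draft an email to the HR team"))  # -> llm
```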

Ultimately, the integration of generative AI into companies must not be a frantic race towards productivity at any cost. It calls for an informed digital transformation, where technical performance is inseparable from respect for ethical and environmental values. In this context, it is becoming urgent for companies to establish an ethics committee capable of regulating the use of generative AI. Relying on independent expertise, this board will be the guarantor of responsible AI use, making it possible to combine innovation, protection of sensitive data, and preservation of resources, while ensuring that technological advances benefit society as a whole.


[Cover photo: Vitaly Gariev]