The rise of AI ethics experts and the creation of a culture of corporate ethics innovation

To address the many ethical concerns surrounding generative AI, including privacy, bias, and misinformation, many technology companies have begun working with AI ethicists as employees or consultants. These experts are responsible for managing how companies introduce AI into their products, services, and workflows.


Bart Willemsen, vice president and analyst at Gartner, says it is more effective to have a dedicated ethics expert or team than to bolt the function onto an existing role. “Having a dedicated department with a consistent approach that continues to evolve over time with the breadth of topics discussed and lessons learned from previous conversations and projects increases the success rate of just and responsible use of AI technologies,” Willemsen explained.

While the intentions of companies adding these roles are good, there is a risk that AI ethics experts will become temporary positions with no meaningful impact on the direction and decisions of the company. So how should companies pursuing ethical decision-making and responsible AI integrate AI ethics experts?

We asked technology and AI ethics experts from around the world how companies can achieve these goals. Best practices from experts can help companies transform ethics from a regulatory compliance challenge into a source of sustained competitive advantage.

AI ethics expert as technology educator

For some people, the term ‘ethicist’ may conjure up the image of a person lost in thought, disconnected from the day-to-day realities of the organization. In reality, the AI ethics expert is a highly collaborative role that requires horizontal influence across the organization.

Joe Fennell, an AI ethics expert at the University of Cambridge in the UK, frequently consults for companies on ethics as well as performance and productivity. Fennell likens ethics to jiu-jitsu: “The higher your belt, the less jiu-jitsu is about the movements and the more it is about the principles behind the movements, principles such as balance, leverage, and dynamism,” he emphasized.

Fennell approaches AI the same way. For example, to reduce the rate of hallucination in generative AI, he does not have students memorize specific phrases when teaching prompt engineering. Instead, he teaches broader principles, such as when to use examples and when to use instructions to guide the model. By integrating these techniques into an overall methodology that also considers safety and ethics, Fennell says, he gets people to pay attention to ethics.
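Fennell’s distinction between memorized phrases and transferable principles can be illustrated with a small sketch. The helper below is hypothetical (not from Fennell’s curriculum); it shows two of the principles he alludes to, namely constraining the model with explicit instructions and demonstrating the desired behavior with worked examples, the kind of few-shot structure commonly used to reduce hallucination:

```python
# Hypothetical sketch: applying two transferable prompting principles
# rather than a memorized magic phrase:
#   1. an instruction that constrains the model to the given context,
#   2. worked examples demonstrating the desired behavior (few-shot).

def build_prompt(question: str, context: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a grounded few-shot prompt as a plain string."""
    parts = [
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'",
        f"Context: {context}",
    ]
    # Principle 2: show the model what good behavior looks like,
    # including a demonstrated refusal when the answer is absent.
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

prompt = build_prompt(
    question="When was the data center opened?",
    context="The Frankfurt data center opened in 2021.",
    examples=[
        ("Where is the data center?", "Frankfurt"),
        ("Who operates it?", "I don't know."),  # demonstrates refusal
    ],
)
print(prompt)
```

The point of the sketch is the structure, not the wording: once someone understands why grounding instructions and demonstrated refusals work, they can adapt them to any model or task.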

Darren Menachemson, senior ethics expert at Australian design consultancy ThinkPlace, believes one of the core responsibilities of ethics professionals is communication, particularly around governance. “Governance means companies need to have a good understanding of the technologies that can actually control, mitigate and deal with risk,” Menachemson said. “This means that AI as a concept needs to be communicated well so that people understand its limitations and can use it responsibly.”

Of course, these guidelines also face cultural challenges. There is a “move fast and break things” sentiment in the technology ecosystem, which has only grown stronger as AI spreads. “There is a sense of desperation in many companies,” Menachemson said. “There is an urgency to move quickly, keep pace with current trends, and take advantage of incredible opportunities that are too important and offer too many benefits to ignore.”

Menachemson points out that ethics professionals, especially senior ones, need three qualities to succeed despite these challenges. The first is understanding the nuances of AI technology and the level of risk those nuances pose given a company’s risk appetite. The second is the will to engage stakeholders to “understand the business context in which AI is being introduced and provide specific guidance beyond general guidance.”

The third quality is key to carrying out the second. “If you confuse business users with technical or highly academic language, you lose their support and your opportunity to have real influence,” Menachemson says. “Senior ethics officers must be expert communicators and understand how to link ethical risks to top management’s strategic priorities,” he added.

Providing actionable guidance at two levels

Ethics may be subjective, but the work of AI or technology ethicists is by no means imprecise. When dealing with specific issues, such as user consent, ethics experts typically start from broad best practices and make recommendations tailored to the company.

Matthew Sample, an AI ethicist at Northeastern University’s Institute for Experiential AI, told Computerworld, “We try to explain what the current industry standard (or state of the art) is for responsible AI and how to prioritize among the various possibilities. For example, if a company is not auditing its AI models for safety, bias, monitoring over time, and so on, that’s where we want them to focus.”
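The kind of bias audit Sample mentions can take many forms; one of the simplest is a demographic-parity check. The sketch below is illustrative only (the metric, tolerance, and data are assumptions, not Sample’s method): it compares the rate of positive model outcomes across groups and flags any group that deviates from the overall rate beyond a chosen tolerance.

```python
# Illustrative sketch of one basic fairness audit: demographic parity.
# Flags any group whose positive-outcome rate deviates from the overall
# rate by more than a chosen tolerance.

from collections import defaultdict

def demographic_parity(outcomes, tolerance=0.1):
    """outcomes: iterable of (group, predicted_positive: bool) pairs.
    Returns (per-group positive rates, groups exceeding the tolerance)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, positive in outcomes:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    overall = sum(positives.values()) / sum(totals.values())
    flagged = [g for g, r in rates.items() if abs(r - overall) > tolerance]
    return rates, flagged

# Toy data: group A receives positive outcomes at twice the rate of group B.
data = [("A", True)] * 8 + [("A", False)] * 2 \
     + [("B", True)] * 4 + [("B", False)] * 6
rates, flagged = demographic_parity(data)
print(rates, flagged)  # both groups deviate from the 0.6 overall rate
```

A real audit would go well beyond this (multiple metrics, statistical significance, monitoring over time, as Sample notes), but even a check this simple makes the “are we auditing at all?” question concrete.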

In addition to these best practices, Sample also gives companies detailed advice on how to organize for ethics. If no one in the company is thinking about AI ethics, for example, the focus may need to be on hiring.

However, he avoids making heavy-handed recommendations. “In the spirit of ethics, we don’t say, ‘This is the only right thing to do at this point,’” Sample added.

Menachemson takes a similar two-level approach in his own work. At the highest level, ethics experts provide general guidance on what the risks are for a particular matter and what mitigation and control measures are available. “But we also need to go deeper,” Menachemson points out.

This deeper step focuses on the company’s unique situation and can only follow once the general advice is understood. “Only after this due diligence is completed can meaningful recommendations be made to the CEO or board,” Menachemson says. “Until due diligence is completed, we cannot be confident that we are actually controlling risk in a meaningful way,” he emphasized.

Cambridge’s Fennell believes AI ethicists should broaden rather than narrow the scope of what should be discussed, addressed and communicated. “The more comprehensively we address our AI ethics agenda and assessments, the more diverse our AI safety implementations will be, and likewise, the more robust our risk prevention and mitigation strategies will be,” Fennell said.

Everyone must be an ethicist

Jesslyn Diamond, head of data ethics at Telus Digital, said her group uses red teaming to anticipate unintended consequences of generative AI, such as potential misuse, to identify gaps, and even to deliberately attempt to subvert systems. “We also use the concept of blue teams, through purple teams, to build innovative solutions that together can protect and improve outcomes,” Diamond explained.

The purple team comprises experts from various fields, including QA, customer service, finance, and policy. The non-deterministic nature of generative AI makes these diverse perspectives, inputs, and areas of expertise all the more necessary.

Diamond says forming a purple team creates an opportunity for different types of professionals to use the technology, helping them explore risks and unintended consequences, both important considerations in ethics, as well as uncover additional benefits.

Telus also provides its employees with specialized training on concepts such as data governance, privacy, security, data ethics, and responsible AI. Employees then become data managers in their respective areas. To date, Telus has a network of over 500 data managers.

“Becoming more familiar with how AI works will allow people with a variety of expertise and backgrounds, both technically skilled and not, to participate in this important work,” Diamond explained.

It seems obvious that ethics should span many fields, but too many companies relegate ethics to a corner of the organization. “To manage technology meaningfully, it is critical for people to understand the technology, and understanding must go hand in hand with engagement,” Diamond said.

Creating a culture of ethical innovation

The goal of ethics consulting is not to create a service-desk model in which colleagues or clients must always come back to the ethics expert for additional guidance. Ethics officers generally aim to give stakeholders a degree of independence. “We want to make our partners self-sufficient. I want to teach them so they can do this on their own,” Sample says.

Ethics professionals can promote ethics as a core company value, along with teamwork, agility, and innovation. The key to this change is understanding the goals of companies implementing AI. “If you believe that AI will transform business models, senior executives and boards have an obligation to ensure that AI is not disconnected from the organization, its people, and its customers,” Menachemson said.

This alignment is especially important in an environment where companies jump into AI without a clear strategic direction simply because the technology is popular. Gartner’s Willemsen says a dedicated ethics expert or team can solve one of the fundamental problems surrounding AI. One of the most frequently asked questions at board level, regardless of the project underway, is whether the company can use AI. “To some extent, that is understandable, but the second question, ‘Should we use AI?’, is almost always omitted,” he added.

Willemsen points out that the order of these two questions needs to be reversed. “First, what are you trying to achieve? Let’s forget about AI for a moment. That’s what you need to focus on first,” he said, emphasizing that most companies that take this approach achieve more reliable success.

This simple question should be part of a larger program of corporate reflection and self-assessment. Willemsen believes companies can improve AI ethics by broadening the scope of their questions, asking hard questions, caring about the answers, and ultimately doing something with those answers. AI may be innovative, but we must carefully examine what benefits it may or may not provide to people.

“This includes not only the capabilities of AI technology and the extent to which the technology must be controlled to prevent unwanted outcomes, but also the inhumane conditions in the mining operations behind the hardware that runs it, and the immeasurable damage caused by unprecedented power consumption and water use for data center cooling,” Willemsen said.

According to Willemsen, companies that are fully aware of these issues and reflect them in their AI initiatives will benefit. “The value of AI ethics may not be immediately visible,” Willemsen said. “But knowing what is right and what is wrong means, in the long term, the value and greater benefit of AI ethics: consistently applying the technology only where it is truly useful and makes sense.”

Source: www.itworld.co.kr