

AI ethics maturity model: A company guide

Oct. 18, 2021 Hi-network.com

Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce


According to research by Kathy Baxter, principal architect of Salesforce's ethical AI practice, artificial intelligence (AI) development is a high priority for enterprises, with worldwide AI spending expected to hit $110 billion in 2024. Yet 80% of executives focused on AI are struggling to establish processes that ensure responsible AI use. Baxter's research also notes that 93% of consumers say companies have a responsibility to look beyond profit and impact society positively, and 79% of the workforce would consider leaving an employer that demonstrates poor ethics. My conversations with business leaders about the future impact of AI often turn to core values, guiding principles and the ethical use of emerging technologies such as machine learning, natural language processing, chatbots, computer vision, deep learning and smart robotics.

As Principal Architect of Ethical AI Practice at Salesforce, Baxter develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, Baxter worked at Google, eBay, and Oracle in User Experience Research. Baxter is the author of "Understanding your users."

According to Baxter, developing ethical AI is not a nice-to-have but the responsibility of the entire organization. Baxter goes further, noting that responsible AI use is table stakes for businesses. She references research noting that 86% of consumers say they would be more loyal -- and 69% say they would spend more money -- with a company that demonstrates good ethics.

I asked Baxter to share her team's findings on how best to establish an ethical AI practice, based on the methodology they have proven at Salesforce. At Salesforce and Tableau, Baxter and her team worked closely with all stakeholders (customers, employees, business partners and communities) to develop and implement AI responsibly, aiming to reduce bias and mitigate risks to the company and its customers.

Ethical AI Practice Core Values

Before describing the AI maturity model, Baxter and her team stress that core values must be clearly defined and communicated. The core values begin with the notion that the benefits of AI should be accessible to everyone. Here is the research team's commitment statement: "We believe the benefits of AI should be accessible to everyone. But it is not enough to deliver only the technological capabilities of AI -- we also have an important responsibility to ensure that AI is safe and inclusive for all. We take that responsibility seriously and are committed to providing our employees, customers, and partners with the tools they need to develop and use AI safely, accurately, and ethically."

Five core value pillars further define the AI practice commitment:

  1. Responsible: To safeguard human rights and protect the data we are entrusted with, we work with human rights experts and educate, empower and share our research with customers and partners.
  2. Accountable: To create AI accountability, we seek stakeholder feedback, take guidance from the Ethical Use Advisory Council, and convene our own data science review board.
  3. Transparent: We strive for model explainability and clear usage terms and ensure customers control their own data and models.
  4. Empowering: Accessible AI promotes growth and increased employment and benefits society as a whole.
  5. Inclusive: AI should respect the values of all those impacted, not just those of its creators. To achieve this, we test models with diverse data sets, seek to understand their impact and build inclusive teams (a minimal illustration of per-group testing appears after this list).
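
To make the "test models with diverse data sets" pillar concrete, here is a minimal sketch of a per-group evaluation, assuming a binary classifier and a single hypothetical demographic attribute. The groups, records, and the four-fifths (0.8) threshold are illustrative assumptions, not part of Salesforce's published methodology.

```python
# Minimal sketch: per-group evaluation of a binary classifier's outputs.
# Groups, records, and the 0.8 "four-fifths" threshold are illustrative
# assumptions, not part of Salesforce's published methodology.
from collections import defaultdict

def selection_rates(records):
    """Return the positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        positives[rec["group"]] += rec["prediction"]  # prediction is 0 or 1
    return {group: positives[group] / totals[group] for group in totals}

def passes_four_fifths(rates, threshold=0.8):
    """Every group's selection rate should be at least `threshold` times
    the highest group's rate (a common disparate-impact heuristic)."""
    highest = max(rates.values())
    return all(rate / highest >= threshold for rate in rates.values())

if __name__ == "__main__":
    # Hypothetical scored records for two groups.
    records = [
        {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0}, {"group": "B", "prediction": 1},
        {"group": "B", "prediction": 0}, {"group": "B", "prediction": 0},
    ]
    rates = selection_rates(records)
    print(rates, "passes four-fifths rule:", passes_four_fifths(rates))
```

A real practice would track more than one fairness metric and evaluate against the groups most at risk of harm for the specific application, not a fixed pair of labels.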

Trusted AI Principles (from principles to practice)

Ethical AI Practice Maturity Model 

There are four stages in the AI practice maturity model: ad hoc; organized and repeatable; managed and sustainable; and optimized and innovative.

  1. Ad Hoc: In the ad hoc stage of the maturity model, individuals begin identifying unintended consequences and informally advocating for the need to consider bias, fairness, accountability, and transparency in their companies' AI. This advocacy creates a groundswell of awareness, empowering people to pause and ask not just "can we do this?" but "should we do this?" That initial momentum comes easier with encouragement. Creating a discussion group on internal social media channels is a great way to share knowledge and excitement and to identify advocates for the work. Informal tech talks are another excellent resource.

    Baxter reminds us that historically, early advocates for this approach have taken on full-time roles within their companies to build an ethical AI practice. The process of having this formal role created and filled can take a year or more of building trust among leaders and demonstrating the importance of developing AI responsibly. However, as more executives see the importance of a responsible AI practice, companies without an internal advocate are now looking to hire from outside.

    Entire teams and dedicated budgets do not emerge overnight, so ethics reviews by the lone ethics expert are often ad-hoc and limited to individuals or small teams that have bought into the importance of a responsible AI practice. These small successes are critical in building up a portfolio of "wins" and earning more advocates across the company.

  2. Organized and repeatable: At this stage, executive buy-in has been established, and the company is developing a culture where responsible AI practices are rewarded. Part of this culture creation is the development of a set of ethical principles and guidelines. Virtually every company with an ethical AI team -- including Salesforce (einstein.ai/ethics) -- has published a set of guiding principles.
    Baxter reminds us that simply taking a generic set of principles and publishing them on your company website will likely be little more than "ethics washing" and result in minimal change. 
  3. Managed and sustainable: Depending on the size of your company and success at educating existing employees, you may be able to shift your focus to ensuring new employees know what their role is in ensuring responsible AI. Employees at many companies have a lot of mandatory training to attend, so it is worth considering how much training should be mandatory. Every employee working on AI should at least know your ethical AI principles and any customer restrictions on how your AI can be used (for example, at Salesforce, we do not allow our vision AI to be used for facial recognition).
    At this point, your company has introduced ethics checkpoints throughout the product lifecycle. Formal processes like consequence scanning workshops, ethics canvases, harms modeling, and community juries, along with the creation of documentation like model cards (like nutrition labels for models) or FactSheets, are implemented and required by management (a minimal, hypothetical sketch of a model card appears after this list). The set of processes and documentation will likely grow as your practice matures.
    Baxter reminds us of the importance of equality and inclusion. "A mix of professional experience in human rights, ethics and philosophy, user research, AI, policy and regulations, as well as data science, product and program management, will also yield better outcomes. Diversity is your superpower because different value systems require different mechanisms for fair decision-making," said Baxter.
    Baxter is also a strong advocate for independence. She reminds us that independence is required for honesty and integrity in your ethical AI practice. "Part of creating a successful practice is understanding the inherent value of critical perspectives and incorporating them into critical decision making. Creation of an external AI Ethics Advisory Council can provide significant value by providing alternative points of view and avoiding echo chambers of thought," said Baxter.
    Responsible AI development lifecycle

  4. Optimized and innovative: This is the end state you are striving for. But we intentionally refer to our work as a "practice" because the goal is continuous improvement -- there is no such thing as "perfection" in this work. As new AI applications and methodologies are developed, new ethical risks are identified, and new ways of mitigating them may be needed.
    In order to create end-to-end ethics-by-design, mature AI ethics practices combine ethical AI product development and engineering with privacy, legal, user research, design, and accessibility partners to create a holistic approach to the development, marketing, sale, and implementation of AI. 
    "You may also have moved from a large centralized AI ethics team to a hybrid or hub-and-spoke model. In the hybrid model, a centralized ethics team owns standards and the creation of new processes while individual ethicists are embedded in AI product teams to provide dedicated, context-specific, and timely expertise.There is no one "right" model;it depends on the size of your company, the number of product teams building AI applications, how diverse those offerings are, your company's culture, and more," said Baxter.


Baxter also warns companies about metrics and the important role they play in developing AI-powered products and services. "Product roadmaps and resources should explicitly require that ethical debt is addressed and features to help customers use your AI responsibly are regularly developed. With the prior establishment of metrics, it is now possible to set minimum ethics thresholds for launch in order to block the launch of any new product or feature that does not meet that threshold," said Baxter.
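
As a hedged illustration of those minimum ethics thresholds, the sketch below blocks a release when any tracked metric falls short of its agreed minimum. The metric names and threshold values are hypothetical, not Salesforce's actual launch criteria.

```python
# Hypothetical launch gate: block a release if any ethics metric falls below
# its agreed minimum. Metric names and thresholds are illustrative only.
ETHICS_THRESHOLDS = {
    "explanation_coverage": 0.95,        # share of predictions with an explanation
    "min_group_selection_ratio": 0.80,   # four-fifths rule from the earlier sketch
    "documentation_complete": 1.0,       # model card fully filled in (1.0 = yes)
}

def launch_allowed(measured: dict) -> bool:
    """Return True only if every metric meets or exceeds its threshold."""
    failures = [
        name for name, minimum in ETHICS_THRESHOLDS.items()
        if measured.get(name, 0.0) < minimum
    ]
    for name in failures:
        print(f"BLOCKED: {name} = {measured.get(name, 0.0)} "
              f"(minimum {ETHICS_THRESHOLDS[name]})")
    return not failures

if __name__ == "__main__":
    print(launch_allowed({
        "explanation_coverage": 0.97,
        "min_group_selection_ratio": 0.72,  # fails its threshold, so launch is blocked
        "documentation_complete": 1.0,
    }))
```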

A strong ethical AI practice will include multiple success metrics that are deeply understood and discussed regularly prior to new product launches.

"The Ethical AI field is relatively new, and we are all learning together as we understand risks and harms associated with certain AI technologies or applications of them to different populations. The proposed maturity model will change as our understanding and practice develops, and it is our hope that we can co-create this field together," said Baxter. Baxter and her team regularly post articles about AI ethics that can be found at https://einstein.ai/ethics. 


This article was co-authored by Kathy Baxter. As Principal Architect of Ethical AI Practice at Salesforce, Kathy develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. She collaborates and partners with external AI and ethics experts to continuously evolve Salesforce policies, practices, and products. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. She received her MS in Engineering Psychology and BS in Applied Psychology from the Georgia Institute of Technology. The second edition of her book, "Understanding your users," was published in May 2015. You can read about her current research at einstein.ai/ethics.

 


Tags: Artificial Intelligence, Innovation
