Human Portfolio Optimization:

Navigating the power, promise and perils of AI.

This paper introduces a novel approach for organizations to manage the complexities of AI, ESG, and DEI in a rapidly changing global economy to capitalize on diversity and drive sustainable growth.

Summary Abstract

Artificial Intelligence (AI) is exploding in business and consumer technologies that underpin our everyday lives. These new capabilities are filled with great promise for driving unprecedented productivity, improving quality of life, and catalyzing growth for individuals and organizations. 

However, they are also riddled with perils that are emerging throughout the public and private sectors and threaten to jeopardize the stability of our economic and sociopolitical systems. Meanwhile, the rise of Environmental, Social & Governance (ESG) scoring models and Diversity, Equity and Inclusion (DEI) imperatives has increased pressure on organizations to understand the challenges and opportunities of AI, particularly as it relates to stakeholder impact across employees, customers and suppliers. 

Taken together, AI, ESG and DEI have created a “perfect storm” for organizations as they struggle to remain relevant and competitive in a hyper-dynamic global economy. Layer in a plethora of new regulatory mandates and a powerful, vocal sustainability movement and you have a daunting, even intractable, challenge facing organizations large and small across every industry and geography. Most importantly, there is a massive, unrealized opportunity in capitalizing on the power of diversity across all stakeholder groups, a benefit that recent studies confirm across key performance metrics—revenue, profitability, cashflow, productivity, innovation, retention and more. So how should leaders begin their journey of understanding what all of this means to their unique business model, start regaining control, and drive change to maximize sustainable growth? 

This paper proposes an innovative but straightforward approach for navigating that perfect storm to ultimately emerge stronger and achieve competitive advantage. By applying concepts from investment portfolio theory, every organization has the opportunity to uncover its “Efficient Frontier,” or proverbial sweet spot, of stakeholder groups that balances risk and return given the unique business model and external world in which it operates.

The Promise & Peril of AI

AI Technologies in an Interconnected Global Economy

AI is absolutely everywhere these days—headline after headline, podcast after podcast. While a great deal of excitement exists around the power and promise of AI, there’s also the specter of unprecedented risks and widespread existential fear about what it means for us as individuals and an interconnected, global economy. Despite the fact that AI has been emerging for a few decades, our journey is still relatively nascent. Essentially, it’s a massive iceberg, and we’re just starting to uncover the technological capabilities that lie beneath the surface.

There are plenty of stakeholders who might wish we could avoid this new chapter in our technological innovation altogether, but the evidence is clear—we’re barreling toward it, whether we like it or not.

The global AI market is growing at a 35.6% compound annual growth rate (CAGR), expanding spending from roughly $30B in 2020 to more than $300B by 2028.1

Meanwhile, 70% of organizations are expected to adopt some form of AI technology by 2030, up from today’s 33%.2 The Covid-19 pandemic alone accelerated AI adoption plans for 52% of organizations in 2020, and 67% expect to further accelerate their investment initiatives in 2022. AI is already deeply embedded in our economy as 86% of organizations now consider it to be a “mainstream technology” versus something aspirational reserved for early adopters or bigger, more sophisticated firms.3

This ubiquitous expansion has created an unavoidable charge for every organization—and every leader within it—to study and understand AI. The imperative is to grasp how it impacts (or could impact) their organization, determine the challenges and risks imposed by it, and define how to respond in a way that drives competitiveness and strong performance, while also remaining compliant. That journey begins with taking stock of recent and emerging developments. The good news is that there’s a portfolio of data already available through a wide variety of use cases across industries, which is rich with insights that can help us navigate the path forward. These examples, and the experiences gained through them, can provide a clearer picture of what’s working and where things are going off the rails. They can teach us what’s needed to maximize the benefits of AI while also mitigating the downside risk.

Runway for Growth or Runaway Train?

AI: A Runway for Growth or a Runaway Freight Train?

To gain more context around the opportunities and challenges at hand, let’s consider a few scenarios that underscore how the promise of AI is being compromised by the risks it can pose to stakeholders and organizations across a wide variety of industries and functions:
  • Consumer Credit Scoring: Wells Fargo was recently cited for excessive rejection rates on mortgage refinancing applications for people of color.4 Beyond anomalously high rejection rates and interest pricing, Black, Indigenous, and People of Color (BIPOC) are struggling with a biased property appraisal system that further exacerbates the barriers to wealth generation and full participation in the economy.5 The banking industry has also been under pressure for creating a troubling bias within AI-based credit/risk scoring systems more broadly. In one case, systems were yielding significantly higher average interest rates for Black borrowers than for white borrowers with the exact same income and credit risk profile—an unfair discrepancy that has led to billions of dollars in excessive interest payments and effectively created a wealth-building barrier in the Black community.6 In another case, an AI-based banking system rejected Black mortgage applicants at a rate 80% higher than white applicants with similar income levels and credit risk profiles.

  • Marketing Segmentation: Airbnb launched an AI-based price optimization platform with the goal of mitigating racial inequities across its supply and demand dynamics—but the system unexpectedly exacerbated the divide because Black hosts were significantly less likely to adopt the technology than white hosts.7

  • Talent Acquisition: Amazon was forced to scrap an AI-based recruiting system after three years of investment because the model eliminated women from the applicant pool for open roles and advancement opportunities.8 Meanwhile, we know that 94% of applicant tracking systems powered by AI arbitrarily eliminate 88%-94% of qualified applicants—most of whom are women and minorities—driven by algorithmic models and training data that reject candidates based on attributes unrelated to job qualification.9

  • Patient Healthcare: Optum and other healthcare organizations have been criticized for algorithms that resulted in less money being spent on caring for Black patients than white patients—based on models that falsely concluded that Black patients were healthier than equally sick white patients.10 In another scenario, a widely used AI-based patient scoring model called NarxCare unfairly tagged patients as “drug-seeking” based on inaccurate cumulative prescription histories, and in one case even incorporated data from a patient’s pet medication protocols post-surgery.11

  • Public Sector Citizen Benefits: In the Netherlands, an AI-based system wrongly accused approximately 26,000 parents of making fraudulent benefit claims, requiring them to pay back reimbursements that totaled thousands—in some cases, tens of thousands—of Euros, precipitating financial hardship for many families, most of whom were people of color.12

  • Criminal Justice System Sentencing: U.S. court systems are using AI-based technology to assign a risk score to each defendant, a practice found to have a disproportionate impact on BIPOC communities because the systems have been trained on historical crime data. At its core, the problem is that these systems treat attributes like low income as causal rather than correlational—and are fueled with statistics plagued by systemic racism perpetuated across decades.13

These examples—just a few among hundreds—give credence to the widespread concern over the adverse impacts that AI poses, not to mention what lies on the horizon. 

Organizations are worried that they will get hit with imminent regulatory costs and complexities. But more pressing for executive teams and boards is the fear that a competitor will beat them to the punch in implementing game-changing AI technologies to drive greater growth and productivity.

At the same time, workers are terrified that AI-based innovations will render them irrelevant in the economy—devaluing their hard-earned skills and the career paths they have pursued.

So, the big question is: How do we successfully embrace the opportunities and effectively meet the challenges, while remaining good corporate citizens and human beings?

The Efficient Frontier for Human Portfolios

Finding a Solution: Uncovering the Efficient Frontier for Your Human Portfolios

Most organizations are endeavoring to do the right thing for all stakeholders—maximize shareholder value and be accountable to their boards; stay relentlessly compliant in the face of ever-evolving global regulations; and exemplify good corporate citizenship. At a minimum, strong corporate leaders are trying to “do no harm” while aspiring to be part of the “good guys” working to improve the environment, achieve equality of opportunity, and mitigate a myriad of risks with sound business practices and accountable organizations. However, knowing clearly what “good” looks like and how to achieve it is becoming increasingly elusive and opaque. Yet you have no choice but to figure it out or else risk compromising all of the things your team is working so hard to get right—the brand you have built, the customers you have amassed, the amazing products and intellectual property you have created, the financial performance you have driven for your top and bottom lines.
Beyond getting ahead of the challenges and risks organizations inevitably face with all of the dynamics described in this paper, there is also a massive opportunity that a richer human portfolio picture presents on so many fronts—in terms of your all-important talent, your market opportunity and customer base, and your interconnected supply chain network. Staying in your comfort zone inevitably limits the world of possibilities that might await and drive unprecedented levels of success.
The numbers emerging from study after study indicate that the return on diversity, or RoD, is quite compelling across all key performance indicators, including:14
  • Performance: 15% (gender) to 35% (ethnic) higher
    Companies in the top quartile for ethnic diversity are 35% more likely to financially outperform their national industry medians; those in the top quartile for gender diversity are 15% more likely (McKinsey).
  • Revenues: 19% higher
    Diverse management teams deliver 19% higher revenues from innovation compared to their less diverse counterparts (BCG).
  • Cashflow: 2.3x higher per employee
    Companies with a diverse set of employees enjoy 2.3 times higher cash flow per employee (Bersin).
  • EBIT: 3.5% increase
    In the UK, for every 10% increase in gender diversity on the senior executive team, EBIT rose by 3.5 percent (McKinsey).
  • Culture: 26% more collaboration, 18% more commitment
    Employees in highly diverse and inclusive organizations show 26% more team collaboration and 18% more team commitment than those in non-inclusive organizations (CEB/Gartner).
  • Productivity: 2x faster
    Teams that follow an inclusive process make decisions 2x faster with 1/2 the meetings (Forbes).
  • Retention: 3x stronger
    Inclusive companies are 3x more likely to retain millennials 5+ years (Deloitte).

A consumable and actionable departure point for achieving your organization’s unique version of “good” can be taking stock of where you stand on the human portfolio optimization front.

So much power and opportunity exist in understanding how you compare to peers or competitors, and the population at large across all major stakeholder groups, including your employees, customers, and suppliers. Gaining visibility into the demographic nuances that comprise your picture on all of the relevant use cases that underpin and drive your organization’s success can tell you so much about strengths and vulnerabilities.
Meanwhile, instituting human portfolio optimization can become foundational for sparking strategic conversations leadership needs to have about your past, present and future. This approach can provide clarity on current state as well as what is possible, helping your team to set realistic expectations for your journey to becoming the best version of who and what you can be. Ultimately, what an outside-in view of your organization and its stakeholders can provide is clarity on your human portfolios, or what the “Efficient Frontier” looks like for all stakeholder groups—employees, customers, patients, citizens, suppliers—that maximizes financial performance along with other mission-critical sustainability objectives.

Section Two

The Perfect Storm:

What Makes Responsible AI So Challenging?

Where do we begin to make such a complex imperative more consumable? Let’s start with more context around how we got to this point and why the problem is so difficult for all organizations—across all geographies, verticals, and sizes. There are four key dynamics at play that have conspired to create this Catch-22 situation:
The Perfect Storm

Increasing pressure from everywhere

There is ubiquitous but conflicting pressure on organizations related to AI. On the “pro” side of the AI adoption equation, you have boards and other stakeholders demanding that firms capitalize on AI to maximize productivity and ultimately profitability. As noted earlier, adoption of AI is expected to explode over the coming decade as at least 70% of organizations incorporate the technology into their core business processes and the systems that underpin them.

At the same time, there is significant negative pressure in the form of over 600 impending regulations related to AI, most originating in the European Union and Canada, along with the Algorithmic Accountability Act in the U.S.

Penalties will likely be daunting: the European Commission is recommending a fine of 6% of annual revenue for irresponsible AI, steeper than the 4% now in place under the General Data Protection Regulation (GDPR), which governs the use of personal data in corporate, consumer, healthcare and public systems.15 Stakeholder scrutiny is also intensifying, as stakeholders are justifiably concerned about the adverse impact that AI-based systems may have—on consumers, the environment, our democracy, and the global interconnected economy. As we noted earlier, many organizations are landing on the front page due to the adverse outcomes their systems have caused. While the damage is largely unintentional, it is nonetheless detrimental to stakeholders as well as brand image, operational effectiveness, and market value.
Beyond the direct pressures related to achieving responsible AI, there are broader forces at play related to ESG scores and DEI mandates. Both have emerged as important imperatives for all organizations, particularly given increasing awareness of the impact that bias and discrimination based on race, gender and other factors have had on our economy and sociopolitical landscape. Meanwhile, ESG is increasingly used to determine whether an organization is a “good corporate citizen,” an area of study for investment teams that has also been referred to as “stakeholder capitalism” or “sustainability.”
Breaking down the acronym, scrutiny of Environmental practices has focused primarily on carbon footprint metrics, though this is rightly expanding. The Social component has centered on DEI related imperatives, particularly around diversity in talent at the employee and board levels as well as equal pay initiatives. Governance has focused on risk-management related activities, such as board practices, fiscal and fiduciary responsibility, cybersecurity, operating disruption, and disciplines related to regulatory or compliance mandates.
Recent studies have underscored the increasing importance of ESG scores in driving market value for public companies. Global ESG investing is projected to comprise one-third of total assets under management (AUM) by 2025, materially reshaping the $140.5 trillion industry.16

One study found that a positive change in an ESG score precipitates a 3.94% increase in stock price on average while a decreased ranking leads to a decline in valuation of 1.85%—a difference of almost 6%.17

Another study showed that it was not enough to “talk the talk,” or focus on disclosure only—only organizations that truly incorporated ESG values and initiatives into their core strategic imperatives and operating model realized a benefit in their market values.18

Like the public company markets, private equity has undergone a material shift in the last twelve months with the momentum behind ESG. Over $3.1 trillion has been committed to ESG, which is more than one-third (36%) of all private capital under management.19 Blackstone Group has been leading the pack with several initiatives, including the hiring of Jean Rogers, founder of the Sustainability Accounting Standards Board, as its first Global Head of ESG for Private Equity, setting a new standard that is catalyzing a domino effect across the interconnected global economy.20 With that shift, there has been a move to operationalize ESG investing in private equity organizations as they evolve their teams’ compensation models and skill sets to be anchored on ESG-related metrics.21
All of these conflicting pressures can be paralyzing for organizations trying to do the right thing, particularly given the imperative to build competitive advantage while navigating uncharted risk territory and doing everything possible to be conscientious corporate citizens.
The Perfect Storm

What ‘good’ looks like

Lack of clarity on what ‘good’ looks like

Despite the increasing importance of ESG scores and their unavoidable impact on critical performance metrics, navigating the maze of people, processes and technology that surround these models is challenging due to the lack of consensus around what “good” looks like. Currently, somewhere between 125 and more than 600 different organizations provide their own scoring models for ESG, each with its own methodology for how each major category is calculated.22 Even more challenging than the proliferation of these scoring models, and the absence of any definitive authority, is the lack of transparency about the methodologies used to calculate them. ESG scores are typically netted out as a single number, which represents where an organization ranks relative to other firms in terms of quartiles. This aggregated value makes it difficult for an individual firm endeavoring to be a good corporate citizen to know where it stands on the three separate dimensions of E, S and G, each of which is significant in its own right and requires a distinct portfolio of strategic initiatives.
Beyond the challenge with a single score that nets out ESG status, an additional consideration is the importance of benchmarking organizations against peers or competitors as opposed to all other organizations operating in the world. More specifically, factors such as our region or geo focus, industry vertical, and company size (e.g., annual revenue, number of customers, and number of employees) are material when considering how an organization is doing on all three dimensions. The organization’s core function is also material in determining the ideal or realistic range of values for key performance indicators (KPIs) that comprise the ESG scoring model, such as energy outputs, employee diversity, governance-related controls, and enterprise technology systems. For example, a large, multinational energy company like British Petroleum should be considered differently from a U.S.-based healthcare organization like Optum or global consumer products manufacturer like Nestle or Adidas. Moreover, carefully defining peer groups for benchmarking is important for providing relevant, realistic, and achievable information needed to help individual organizations take concrete actions and pursue new initiatives that can enable them to become the best version of themselves given their unique business model.
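
To make the peer-cohort idea concrete, here is a minimal sketch in Python, using entirely hypothetical firms, sub-scores, and column names. Rather than relying on one aggregated ESG number, it filters a cohort by industry, region, and revenue band and ranks the focal organization’s E, S, and G sub-scores separately within that cohort.

```python
# Minimal sketch (hypothetical data and column names): instead of relying on a single
# aggregated ESG number, rank each sub-score (E, S, G) within a peer cohort defined
# by industry, region, and company size.
import pandas as pd

firms = pd.DataFrame({
    "firm":     ["A", "B", "C", "D", "E", "Us"],
    "industry": ["energy", "energy", "retail", "energy", "energy", "energy"],
    "region":   ["EU", "EU", "US", "EU", "EU", "EU"],
    "revenue_busd": [42, 18, 9, 55, 23, 30],          # annual revenue in $B (illustrative)
    "E": [61, 48, 77, 39, 70, 58],                    # illustrative 0-100 sub-scores
    "S": [55, 62, 80, 41, 66, 72],
    "G": [70, 58, 65, 52, 74, 61],
})

def cohort_percentiles(df, focal, industry, region, min_rev, max_rev):
    """Percentile rank of the focal firm on each E/S/G sub-score within its peer cohort."""
    cohort = df[(df.industry == industry) &
                (df.region == region) &
                (df.revenue_busd.between(min_rev, max_rev))]
    ranks = {}
    for col in ["E", "S", "G"]:
        pct = cohort[col].rank(pct=True)              # 0-1 percentile within the cohort
        ranks[col] = round(float(pct[cohort.firm == focal].iloc[0]) * 100, 1)
    return cohort, ranks

cohort, ranks = cohort_percentiles(firms, "Us", "energy", "EU", 10, 60)
print(f"Peer cohort size: {len(cohort)}")
print("Percentile within cohort by sub-score:", ranks)
```

Even a toy decomposition like this makes the point: a firm can sit comfortably in the middle of an aggregate ranking while lagging badly on one dimension within its true peer group.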

As ESG investing has grown, so have global compliance mandates anchored in ESG as the cumulative number of policy interventions emerging out of both public and private sector initiatives has grown from just a handful in the early 2000s to over 700.23

Navigating the global regulatory landscape is confusing and daunting for both private and public companies. H.R. 1187, the ESG Disclosure Simplification Act, passed by the House of Representatives in June 2021, would require the SEC to create more clearly defined scoring criteria and reporting standards. However, that regulatory mandate has yet to take shape because the bill has stalled in the Senate.24 Meanwhile, pressure is mounting from the press and other stakeholders that are calling out corporate America. In just one example, a recent Washington Post article noted that the top 50 U.S. firms had committed $50B to DEI mandates around education and employment, as well as access to credit, home ownership and entrepreneurial capital, but had nothing to show for these commitments some 18 months later.25
All of these examples highlight how all stakeholders across the global economy are struggling to define what “good” looks like, both from a regulatory and operational best practices standpoint. The demand for sustainability clearly exists and the supply side is largely eager to engage, yet everyone—from private to public sector leaders, public and private capital markets, and small to large organizations across every industry vertical—is struggling to make sense of how to get after this complex, daunting and elusive challenge and opportunity.
The Perfect Storm

AI Skills

AI skills scarcity, system complexity, and dynamism

Now that we have a better sense of the risks, opportunities, and mandates facing us, let’s turn our attention to solutions. Most important are questions that explore who can help us navigate the challenge and opportunity, or what the team looks like that can define the opportunity and create systems that are efficacious, efficient, and ethical:
  • Product Management: Do we have anyone in the organization with the business expertise needed to define all of the use cases where AI can be applied?

  • Data Science & Engineering: How can we find the person or team that knows how to develop the algorithms needed to build a system for the relevant use cases?

  • Model Training Data: Are we able to access the right data, in the massive volumes required, to train the model effectively?
Beyond these basic questions, there are still more to consider in terms of the expertise needed to maximize the opportunity and mitigate inevitable risks, such as:
  • Sociology/Stakeholder Behavior: Who is able to think through exogenous factors that might also have a material impact on the outcomes of our systems and the ultimate objectives we are trying to achieve?

  • Compliance: Do we have anyone who is an expert in what the global regulatory landscape looks like and how it pertains to this particular system, particularly how it is designed and operated, what the outputs and actions related to it will be, and our unique geo and industry considerations?

  • System Monitoring: Is there someone responsible for tracking the system’s performance continuously? How will we determine if the system is achieving the intended outcomes and avoiding adverse impacts? Do we have a clear approach for how we will calibrate the model, data, and other factors to ensure it is running optimally over time?

  • Progress Tracking: Do we have a strong sense of the KPIs or metrics that should be applied to clearly understand if we are achieving our goals? Will we be able to see how things are trending dynamically in a positive or negative direction?

  • Cross-functional Engagement: How will we engage cross-functional teams and broader organization around these initiatives to drive engagement and ultimately success?

  • External Reporting: Have we defined how our organization will report on all of these systems as part of ESG scoring models and/or DEI imperatives? Have we determined how that reporting output relates to quarterly board updates, SEC Filings, and Annual Report publications?
If you are thinking “no” or “I don’t know,” you’re in good company, because the people with the skills required to achieve any of these things are difficult to find. Furthermore, if you operate in a vertical that is not necessarily “techie,” or are located in a region where strong technical talent is scarce (i.e., outside of cities like San Francisco, Boston, and New York), you are going to struggle with getting the right team and operating model in place—and yet you have no choice but to figure it out to remain relevant.
To underscore the skills scarcity challenge, let’s consider a few statistics. First, AI-related positions have grown dramatically, with increases of 344% for machine learning engineers and 78% for data scientists between 2015 and 2019.26 Compounding the supply and demand disconnect in the labor market, these roles require the best of the best, with at least five years of experience working as a machine learning engineer. At the same time, there are virtually no entry-level opportunities available, which further exacerbates the challenge.27 Beyond the data science experts, even tougher to find is the business domain expertise—strong product managers who team up with the algorithmic gurus to provide context on the market dynamics, the challenge and opportunity, the system’s business requirements, and what a good solution outcome looks like.28
Moving from the “who” to the “what,” recent studies have shown that 96% of organizations trying to implement AI are struggling with securing the right quantity and quality of training data, as models require over 100,000 data samples to perform well.29 Even more sobering is that 90% of the models built by hard-working teams have never made it into production because the ongoing efforts and investment required to realize “good” outputs became too much. The projects were ultimately discontinued; the investments made in them were deemed sunk costs.30
Last but not least, building efficacious AI models has a massively negative impact on the environment. New estimates indicate that training just a single model that works well creates a carbon footprint equivalent to five times the lifetime emissions of an average car.31 Understanding the relevant variables and carefully calculating the cost-benefit of these models is unavoidably critical to corporate responsibility, sustainability, and financial success.
The Perfect Storm

The Big Divide

‘Grand Canyon’ divide in opportunity, income, and wealth

This is where the challenge becomes particularly interesting and thought-provoking. An overwhelming disconnect has been emerging that threatens the core of our capitalist economy and democracy. For both to continue functioning effectively, they require our diverse population to be engaged as productive citizens and consumers. And yet, as we have seen throughout the Covid-19 pandemic in stark and painful ways, that is simply not the case. Beyond the political and cultural divide we face, a recent U.S. jobs report from September highlighted one of the many disconnects with an astounding statistic: while there are 12 million jobs available, there are only 10 million job seekers.32 Furthermore, as we cited earlier with the data scientist demand-versus-supply gap, this disconnect is significantly worse than it appears given the highly advanced skills and multiple years of experience required.
Recent studies shed light on a few fundamental sources of our daunting challenge. One astounding statistic is that AI-based applicant tracking systems, which now manage approximately 90% of job market activity in the U.S., arbitrarily eliminate over 94% of all qualified applicants, most of whom are women and people of color.33 As Cathy O’Neil noted in Weapons of Math Destruction, the problem is compounded by employers’ use of candidates’ credit scores, which have been derived by AI-based systems using approaches known to be racially biased.34

All these factors have conspired to create the ultimate vicious cycle that has left our socioeconomic infrastructure, particularly related to our democracy and capitalistic system, in a fragile state that requires us to work individually and together to overcome.

In the many cases where AI has yielded suboptimal outcomes with widespread negative consequences for entire groups of people, the unfortunate dynamic has taken place for several reasons, including:
  1. Expert Talent Scarcity: Highly advanced, cross-disciplinary teams that include technical, business, and regulatory acumen are required to do this well, and as we have noted, they can be nearly impossible to assemble.

  2. Modeling & Data Quality: Many cases require significantly more rigor around algorithmic modeling and robust volumes of data required to train systems effectively. Unfortunately, historical data used to train systems is often compromised by bias or other factors. Without careful consideration, even well designed and trained systems can result in disproportionate impact as we saw in the case of Amazon and Wells Fargo, perpetuating vicious cycles of poverty, systemic racism and other social ills.

  3. Unexpected, Exogenous Social Factors: Some organizations struggle with unprecedented and unforeseen dynamics in the core models or factors surrounding them that lead to inaccurate or unfortunate results as evidenced by the Airbnb challenge.

  4. Myopic Focus on Performance: Our hyper-competitive, interconnected global economy has created unprecedented pressure that can lead to unfettered, singular focus on growth and productivity at all costs. This myopia can lead organizations to avoid careful consideration of the intersection of other objectives that need to be achieved beyond competitiveness, particularly compliance and conscious capitalism.

  5. Organizational Neglect: AI-based systems and the components that underpin them are dynamic and require continuous monitoring and calibration, yet many organizations “set it and forget it,” so outcomes inevitably veer off the rails and yield suboptimal results.
Negative impacts resulting from AI-based systems are rarely intentional, yet the adverse impact on individuals, groups, and society more broadly has already been profound. So how should an organization, or the individuals who are leading it, equip themselves to know what AI means—to the organization, its products and services, markets and competitors, suppliers, and customers?
The AI story is not all gloom and doom—there is so much opportunity that these new technologies present to us as a society to automate tasks that are tedious, error-prone, or even dangerous, while yielding unprecedented productivity and the gains that come with it. Given all of these factors, how do we capitalize on the best of what AI has to offer while being thoughtful about the inevitable risks and negative consequences or impacts that can accompany it?

Section Three

The Opportunity:

Solving for the C3 Imperative

Organizations should optimize competitiveness, compliance, and conscious capitalism by leveraging AI to enhance performance, ensure regulatory compliance, and promote ESG/DEI objectives.

The Opportunity

The C3 Imperative

To create a digestible framework for the role of AI in business, organizations need to figure out how to optimize three key factors: competitiveness, compliance, and conscious capitalism. What do we mean by that?
  1. Competitiveness: Are we performing well compared to our peers and delivering on financial results for our stakeholders? What role can AI play in making us more competitive and driving stronger results from a top and bottom-line perspective? What use cases are most ripe for AI in driving faster decision-making and better outcomes?

  2. Compliance: Are we in good standing with the relevant regulatory bodies we are accountable to, the stakeholders we engage, and the regions where we operate? Are there ways that AI could help us achieve global compliance more efficiently and effectively, or might it instead increase our regulatory-related risks in one way or another?

  3. Conscious Capitalism: Are we considered to be good corporate citizens of the world from an ESG and/or DEI perspective? What role can AI play in helping us further the objectives we need to achieve? Do we have AI-based systems in place that might compromise these objectives for us as an organization or society more broadly? What can we do proactively to ensure our systems are “doing the right thing” as part of optimizing outcomes for our workforce, customers, and other stakeholders?
This big, audacious goal for AI can be inherently conflicting if not pursued thoughtfully. So what is the best way to get started with an approach that is impactful yet entirely consumable and actionable by any organization?

How do we make sense of our role in doing “good” for all of our stakeholders, particularly as it relates to the C3 imperative for corporate excellence?

The first step is figuring out how to answer the fundamental question, “Am I good?” By “good”, we mean several things—good from an efficacy or productivity standpoint to ensure you’re as competitive as you can be in a hyper-dynamic global economy; good from a compliance standpoint so you can avoid having your top- and bottom-line achievements foiled by regulatory violations or damage caused by algorithmic indiscretions; and good from a stakeholder standpoint where you are part of the solution for ensuring equal opportunity, not a root of the problem.
All of this can sound quite esoteric and insurmountable, but the genius lies in the simplicity of starting with visibility into how you compare through a series of exploratory questions, such as:
  • How does your applicant pool look from a demographic diversity standpoint versus the general population?

  • How does your workforce portfolio stack up compared to the people who are available to you with the relevant skill set?

  • What does the portfolio of human attributes look like for your peers across various roles in the organization compared to what you have achieved?

  • How do the outputs of your consumer risk scoring system compare to the available cohort of consumers in your relevant community?

  • Do your digital marketing systems target the right mix of stakeholders from a demographic standpoint versus what is available in the marketplace?

  • Are your pricing optimization solutions being applied in ways that are fair from a demographic standpoint or are you yielding outcomes that are correlated too strongly to a demographic attribute?
The challenge and opportunity around the C3 imperative is that it is very similar to personal fitness: there are myriad versions of what “good” looks like because we are all different—male/female/non-binary, short/tall, slim/stocky, high/low metabolism, fast twitch/slow twitch, etc. Yet the thing we all share is that finding our “fit” means a unique set of activities and outcomes that make sense for each of us. And once we find that sweet spot, maintaining our distinct version of our fit selves is a relentless, day-after-day effort in a world that is changing while we evolve with it, as we age, face adversity and triumphs, and have more or less freedom and constraints.
Such is the life of an organization endeavoring to maximize its best, unique version of “good” continuously across time and constant change. This is not a binary concept that can be applied bluntly across all geographies, industries, and companies of all sizes, but instead more of a subjective moving target. That said, an organization’s position on all of these demographic attributes compared to peers and the general population is completely measurable and can provide invaluable visibility and insight into strengths and vulnerabilities. Furthermore, this level of detailed benchmarking provides very specific, prescriptive context needed to drive continuous improvement. More specifically, human portfolio benchmarking can help organizations know when they need to tune algorithmic models, bolster training data sets, and/or incorporate external factors that materially impact outcomes (a la Airbnb’s price optimization system noted earlier).
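
As one illustration of what this kind of human portfolio benchmarking might look like in practice, the following sketch (Python, with entirely hypothetical percentages and group labels) compares an organization’s demographic mix for a given role against a peer-median mix and the qualified labor pool, and reports the gaps in percentage points.

```python
# Hypothetical sketch: compare our demographic mix for a role against peer-median
# and qualified-labor-pool baselines, and flag the largest representation gaps.
from typing import Dict

def representation_gaps(ours: Dict[str, float],
                        baseline: Dict[str, float]) -> Dict[str, float]:
    """Percentage-point gap (ours minus baseline) for each demographic group."""
    return {group: round(ours.get(group, 0.0) - share, 1)
            for group, share in baseline.items()}

# Shares for a given role (e.g., software engineers), expressed as percentages.
our_mix    = {"women": 22.0, "bipoc": 18.0, "over_50": 9.0}
peer_mix   = {"women": 28.0, "bipoc": 24.0, "over_50": 12.0}   # peer median (illustrative)
labor_pool = {"women": 31.0, "bipoc": 27.0, "over_50": 15.0}   # qualified pool (illustrative)

print("Gap vs. peers:     ", representation_gaps(our_mix, peer_mix))
print("Gap vs. labor pool:", representation_gaps(our_mix, labor_pool))
# Negative values mark under-representation relative to the baseline and point to where
# recruiting pipelines, role definitions, or screening models deserve a closer look.
```

The same comparison can be repeated role by role and stakeholder group by stakeholder group, which is what turns a one-time diversity report into a portfolio view.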
Organizations can benefit from starting with the question, “Am I good?”, as it can serve as the jumping-off point for self-discovery among executives, boards, and the broader organization. This exploration can define a sense of purpose and a bird’s-eye view from point A to point B that everyone can engage in as a team.

By looking at the intersection of competitiveness, compliance, and conscious capitalism as it relates to your unique organization and a portfolio of concrete metrics that inform where you stand with respect to each of them, an opportunity exists to see your organization more clearly, define a future state you want to achieve, and then pursue the initiatives that can drive you to that point.

For the purposes of this exercise, let’s focus mainly on the Social component of the ESG equation. Historically, this category has been relatively ill-defined in terms of guideline metrics and targets, but where there has been something prescriptive, it has typically focused on things like DEI as it relates to the employee base and board of directors. More specifically, do the organization’s hiring practices enable a reasonable level of diversity within the workforce, and can it demonstrate parity between men and women on compensation in all forms (e.g., base, bonus, stock, and other benefits) for a given role given comparable experience, responsibility and contribution? Beyond DEI at the workforce and board level, the only other evaluative criteria seem to be mostly focused on community engagement through volunteer projects and other types of contributions for non-profit initiatives such as affordable housing, education, healthcare, or other types of services for disadvantaged people and their families.

Given everything we now know about the dynamics presented by this new world order, let’s endeavor to define more clearly what organizations should track so they can mitigate risks and simultaneously capitalize on the opportunities presented to them. The goal is a simple, straightforward approach that can be broadly applied, regardless of the specific use case, and consumed by anyone in the organization or any other relevant stakeholder who wants to understand the impact of various systems. By starting with a look at the demographic portfolios of the organization itself, and then at how those compare to relevant peer competitors and the general population more broadly, we can quickly and easily ascertain where we stand on the “Are we good?” question. More specifically, we might glean that our gender diversity is not what it could be, or that our ethnic/racial mix could be much stronger based on what is available from the general population we engage and serve.
That demographic context gives us a point-in-time look at where we stand, but the challenge is that all of these variables are dynamic—i.e., our systems and their outputs are ever-evolving; the data going into and out of them is a moving target; and the benchmarking cohorts (competitors and general population) are constantly changing. As a result, knowing where you stand on the benchmarking front is a departure point for understanding who you are and who you want to be, where you are now, and where you could go, but it is just the start of your journey. Once you have decided on the results you want to achieve and the initiatives that will help move you from “x” to “y,” it is important to dynamically monitor how things are going, recognizing the fluid nature of the variables that underpin your current and future state. Ongoing monitoring can surface achievements toward and deviations from where you want to go, creating opportunity for continuous improvement and powering an organization to be more relentlessly responsive in a way that will help ultimately maximize the C3 imperative to achieve the optimal intersection of competitiveness, compliance, and conscious capitalism.
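
A minimal sketch of what that ongoing monitoring could look like, again with entirely hypothetical numbers, is shown below: a single stakeholder metric is tracked each month against a target band, and sustained drift outside the band is flagged for review.

```python
# Hypothetical sketch of ongoing monitoring: track a stakeholder metric (here, the share
# of women among approved credit applicants) against a target band each month and flag drift.
TARGET, TOLERANCE = 0.48, 0.05   # illustrative target share and allowed band

monthly_share = {                # illustrative monthly observations
    "2024-01": 0.47, "2024-02": 0.46, "2024-03": 0.44,
    "2024-04": 0.41, "2024-05": 0.40, "2024-06": 0.39,
}

for month, share in monthly_share.items():
    drift = share - TARGET
    status = "OK" if abs(drift) <= TOLERANCE else "REVIEW: outside target band"
    print(f"{month}  share={share:.2f}  drift={drift:+.2f}  {status}")
# A sustained move outside the band is the cue to revisit the model, its training data,
# or exogenous factors before the deviation compounds.
```
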
Let’s bring to life how this might work with a portfolio of use cases.
  1. Talent Management—Healthcare US: A healthcare organization based in the Midwestern United States might want to understand how things are going with its DEI imperatives. This is best understood by looking at its talent portfolio demographics by role, then comparing the gender and racial mix against peer competitors and against the general population that qualifies for each particular role and could be recruited and considered for it. The organization might also want to analyze the population it serves in more detail from a public health standpoint, understanding how its team looks by specialty area (e.g., primary care, cardiology, oncology) and role versus the major disease categories impacting its community (e.g., diabetes, heart disease, cancer).

  2. Consumer Risk Scoring—Banking Europe: Consider a pan-European banking operation with a large consumer credit card business. Leaders would want to begin by looking at their consumer portfolio demographics, both for the full applicant pool and for those accepted and denied, to determine whether rejection rates are anomalously high or low by race or gender. They would also want to evaluate whether interest rate pricing or appraisals vary materially based on race, gender or another relevant demographic trait (a minimal sketch of this funnel analysis follows this list). By evaluating all stages of the funnel, they could determine how their business aligns with the general population or market they are targeting from a demographic standpoint. They would also benefit from evaluating how their employees compare to their customers and the broader community demographically, to determine whether those representing the bank and engaging with clients reflect who they serve.

  3. AI System QA—Enterprise Technology APAC: Consider a fast-growth technology organization based in Asia that produces platforms built on AI. Regulations related to responsible AI require testing models to determine whether they yield efficacious results before applying them to hundreds or even thousands of businesses around the globe. One practical approach is to take multiple sample customers and have them validate and monitor output against peer/competitor data and the broader population, early and often.

  4. Fan Engagement—Professional Sports US: Increasingly, professional sports teams are realizing that the fan base they are engaging is not reflective of the rich diversity in their communities in terms of gender, race/ethnicity, education/income, and other relevant demographic attributes. Seeing how their actual fan demographics compare to the larger population, and to peer teams in their league, can help them begin to evolve with programs that enrich their community while driving better financial performance.

  5. Digital Marketing Segmentation & Customer Management—Consumer Products: All types of organizations are capitalizing on AI-based technology to segment their target prospects globally, drive automated engagement based on scoring models, and ultimately manage their customers through different types of activities once they are in the mix. While digital marketing can be a powerful tool for businesses to drive top-line growth, bottom-line productivity gains, and a better experience for customers, the models that underpin them can impose a risk of bias if not validated carefully for disproportionate impact in terms of race, gender, and other relevant demographic attributes.
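
Returning to the consumer risk scoring example above, the sketch below (Python, with hypothetical application and approval counts) shows one simple way to screen a funnel stage for disproportionate impact: compute the approval rate for each demographic group and compare it to the most-favored group’s rate. The “four-fifths” heuristic used in U.S. employment selection is one commonly cited reference point, not a regulatory threshold for credit.

```python
# Hypothetical sketch for the consumer credit funnel: approval rate by group and the
# ratio of each group's rate to the most-favored group's rate (a common screen for
# disproportionate impact; the "four-fifths" heuristic is one reference point).
funnel = {                       # illustrative counts: (applications, approvals)
    "group_a": (12000, 7800),
    "group_b": (9500, 4700),
    "group_c": (3100, 1400),
}

rates = {g: approved / applied for g, (applied, approved) in funnel.items()}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "" if ratio >= 0.8 else "  <-- investigate"
    print(f"{group}: approval rate {rate:.1%}, impact ratio {ratio:.2f}{flag}")
# The same comparison can be repeated for interest-rate pricing, appraisal values, and
# every other stage of the funnel, then benchmarked against peers and the broader market.
```
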
For all of these examples and many others where AI is being applied to various organizational use cases that might have an adverse impact on a stakeholder population in some form, this simple starting point of gaining visibility to the “Are we good?” question can be quite powerful. Moreover, gaining clarity and transparency can motivate action and focus initiatives, fundamentally changing the mindset of an organization and how it operates in a way that is straightforward yet impactful.
Beyond achieving that initial view, the next opportunity lies in more dynamic monitoring of the same factors, but through a more integrated approach that provides a continuous read-out of what is aligned, what might be veering into a risky state, what the target should be, and what the organization should change or pursue to achieve it. Achieving this next step requires a bit more investment in integration and monitoring, but it can yield a strong, positive impact on the organization from a compliance and risk management standpoint as well as a continuous improvement perspective. Most significantly, tighter integration places the approach at the core of other operational systems and applies it more seamlessly across the entire organization, providing dynamic views into performance and the tradeoffs that should be made.
Next, the organization may want to embark on some level of exploration and experimentation with its model outcomes versus alternative approaches, running both in parallel with controlled groups and comparing the demographic portfolios of the two over time against the peer/competitor group and the general population at large. That comparison might provide meaningful insight and a paradigm shift in the way the organization looks at its stakeholder groups, moving toward a portfolio optimization approach to managing talent, customers, markets, and suppliers. Just as investors continuously seek their own unique “Efficient Frontier” of risk-reward tradeoffs in their portfolio allocations, a similar concept could be applied to evaluating the efficacy of stakeholder portfolios based on the most important impact metrics. Moreover, this kind of dynamic analysis could help organizations achieve the best possible version of their C3 imperative in a way that becomes central to the way they operate.
Let’s consider how this “Efficient Frontier” modeling might play out for two different industry sectors—healthcare and banking. In healthcare, the approach could prove compelling as organizations struggle to balance several complex outcomes simultaneously, particularly overcoming staffing shortages while minimizing rising costs and maximizing diversity, care quality metrics, and patient satisfaction scores. In consumer banking, the model might be used to better understand the communities or markets an organization addresses and then align people, processes and technology to optimize three key variables—maximizing the diversity of the human portfolio demographics while minimizing default rates and growing interest-rate-related revenue. Moreover, by widening the aperture of how organizations view themselves in the context of the world in which they operate, particularly by combining demographic insights with traditional financial performance metrics, an opportunity exists to mitigate and manage risks while unlocking new levels of sustainable growth.
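
To make the “Efficient Frontier” analogy concrete, the sketch below (Python, with entirely hypothetical segment assumptions) enumerates candidate customer-segment mixes for the consumer banking scenario, scores each on expected revenue, expected default rate, and a diversity measure, and keeps only the non-dominated (Pareto-efficient) mixes, i.e., the frontier from which an organization would choose its operating point.

```python
# Hypothetical sketch of "efficient frontier" thinking for a consumer credit portfolio:
# enumerate candidate mixes across customer segments, score each mix on expected revenue,
# expected default rate, and a diversity measure, and keep only the non-dominated mixes.
from itertools import product

# Illustrative per-segment assumptions: (expected annual revenue per account in $,
#                                        expected default rate, diversity index 0-1)
segments = {
    "urban_core": (310.0, 0.045, 0.82),
    "suburban":   (265.0, 0.028, 0.54),
    "rural":      (240.0, 0.033, 0.47),
}

def score(weights):
    """Weighted portfolio metrics for a given segment mix (weights sum to 1)."""
    rev  = sum(w * segments[s][0] for s, w in weights.items())
    dflt = sum(w * segments[s][1] for s, w in weights.items())
    div  = sum(w * segments[s][2] for s, w in weights.items())
    return rev, dflt, div

# Candidate mixes on a coarse grid (steps of 10%).
grid = [w / 10 for w in range(11)]
candidates = [dict(zip(segments, w)) for w in product(grid, repeat=3)
              if abs(sum(w) - 1) < 1e-9]

def dominated(a, b):
    """True if mix b is at least as good as a on every metric and strictly better on one."""
    ra, da, va = score(a)
    rb, db, vb = score(b)
    return (rb >= ra and db <= da and vb >= va) and (rb > ra or db < da or vb > va)

frontier = [m for m in candidates if not any(dominated(m, other) for other in candidates)]
print(f"{len(candidates)} candidate mixes, {len(frontier)} on the efficient frontier")
for mix in frontier[:5]:                     # peek at a few frontier points
    rev, dflt, div = score(mix)
    print({s: round(v, 1) for s, v in mix.items()},
          f"revenue/acct=${rev:.0f}, default={dflt:.1%}, diversity={div:.2f}")
```

The frontier itself does not pick the answer; it narrows the decision to the mixes where no metric can be improved without giving something up, which is exactly the tradeoff conversation leadership needs to have.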
The three initiatives described thus far—stakeholder portfolio analysis and benchmarking; dynamic system monitoring for stakeholder impact and alignment; and experimentation via parallel modeling and efficient frontier optimization across target metrics—can, individually or together, provide a strong foundation for stakeholder reporting, whether for ESG scoring models, regulatory agencies such as the SEC, boards, or other relevant stakeholder groups. Having a programmatic way to capture this kind of information demonstrates a strong commitment to getting these important imperatives right—i.e., DEI, ESG, and the broader “getting to good” imperative. Many organizations are endeavoring to figure this out now, but most are struggling with manual processes that are onerous, disconnected, inconsistent, and incomplete while also imposing unnecessary cost, complexity, and risk on internal teams. Meanwhile, firms are also facing credibility challenges and skepticism from the outside that further exacerbate the pressures cited earlier. All of these factors can put organizations on the defensive, distracting them from competing effectively in a fast-changing environment and precipitating an unhealthy organizational dynamic that can be tough to overcome.
Last but not least, the scarce-skills challenge around all things AI creates an opportunity for a more effective publish-and-subscribe model—i.e., a centralized, cross-functional team purpose-built to help organizations access impossible-to-find talent. This interdisciplinary team would include technical, business, social, ethical and legal domain expertise that could help organizations on their journey to maximizing the C3 imperative. By meeting with cross-functional stakeholders internally to understand their objectives and challenges, the experts would collaborate on thinking through the most important system dynamics impacting stakeholder groups. They could also provide support in building robust testing data sets, which can be excessively expensive and onerous to assemble, while advising on algorithmic modeling or A/B testing that can yield dramatic improvements in impact and performance metrics while mitigating risks.

Section Four

Conclusion:

AI is inevitable… do you want to control your destiny?

Embracing the paradox mindset and focusing on balancing core performance metrics, compliance, and conscious capitalism can unlock creativity, agility, and innovation, yielding competitive advantages in a post-pandemic era.

Conclusion

Controlling your AI Destiny

Now is the time to determine whether you are going to play defense or offense: whether you will get ahead of this challenge and opportunity and be intentional about what “good” looks like for your organization and its stakeholders—or let someone else define it for you. An actionable opportunity exists to balance the core performance metrics that will keep you competitive, stay compliant with regulatory mandates that could otherwise sabotage all of your hard work, and achieve a new level of conscious capitalism that can power your organization in new, innovative ways.
Study after study by scientists and psychologists has found that individuals and organizations that are willing and able to embrace opposing demands realize greater creativity, agility, innovation, productivity, and ultimately growth.

What’s known as the “paradox mindset,” or the ability to engage in dual constraints or seemingly opposing forces, can become the most powerful catalyst for unlocking new levels of enhanced performance.

The key to achieving this mindset—both individually as a leader and across your broader organization—is intentionality and investment. While this journey of discovery is not for the faint of heart, it will yield a competitive advantage for those willing to make a paradox mindset part of their core operating model with a combination of open hearts and minds, relentless collaboration, patience, entrepreneurialism and resilience.35
“Fortune favors the bold, brave and strong” (Latin proverb), and now is no exception as we approach the denouement of the Covid-19 pandemic and the start of a new era full of promise and peril. Let’s choose our fate and drive our organizations and the global interconnected economy to what they can and should be.

Contributors:

Christina Van Houten
Author

Christina is a veteran of the enterprise technology sector, leading strategy, product, and M&A for Oracle, IBM, Infor/Koch, and Mimecast. She serves on the Board of Directors for TechTarget and is involved as an advisory board member for several emerging technology firms and national non-profit boards. Currently, Christina is launching a new enterprise technology platform called Equity Quotient (EQ), which is focused on helping organizations monitor and manage stakeholder bias in their AI-based systems while driving new levels of productivity and growth. She earned a BA in Government and Theology from Georgetown University and attended the University of Chicago Booth School of Business where she received an MBA in Business Strategy. Christina now resides in Boston with her husband and two teenage sons, but recently reestablished roots in her hometown of Tulsa, Oklahoma.

Danielle Rose Fisher
Author

Danielle is a New York-based illustrator, designer, and overall creative professional. Working as a full-time designer & illustrator for the last 10+ years, she has had the opportunity to develop a well-rounded background of all things creative. Her philosophy is to create engaging and accessible user experiences through design and create beautiful things through illustration.

Dee-Dee Strickland
Copy Editor

Dee-Dee Strickland, of Durham, N.C., is a newspaper veteran with 30 years of experience in traditional and digital media. A graduate of Carleton College and the University of North Carolina, Dee-Dee has held editorial positions with the Saint Paul Pioneer Press, Durham Herald-Sun, and Charlotte Observer, among others.

Endnotes:

  1. Grand View Research, “AI Market Size, Share & Trends Analysis Report By Solution, By Technology (Deep Learning, Machine Learning, Natural Language Processing, Machine Vision), By End Use, By Region, and Segment Forecasts … 2021-2028,” June 2021.
  2. McKinsey Global Institute, “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy,” Jacques Bughin, Jeongmin Seong, James Manyika, Michael Chui, Raoul Joshi, September 2018.
  3. Harvard Business Review, “AI Skyrocketed Over the Last 18 Months,” Joe McKendrick, September 2021. 
  4. Bloomberg US Edition, “Wells Fargo Rejected Half Its Black Applicants in Mortgage Refinancing Boom,” Shawn Donnan, Ann Choi, Hannah Levitt, Christopher Cannon, March 11, 2022.
  5. WUSA9, “New report shows home appraisal bias is widespread; contributes to wealth gap,” Larry Miller, March 23, 2022.
  6. “High-Income Black Homeowners Receive Higher Interest Rates Than Low-Income White Homeowners,” Raheem Hanifa, Harvard University Joint Center for Housing Studies, February 16, 2021.
  7. Harvard Business Review, “AI Can Help Address Inequity—If Companies Earn Users’ Trust,” Shunyuan Zhang, Kannan Srinivasan, Param Vir Singh, and Nitin Mehta, September 17, 2021.
  8. Reuters, “Amazon scraps secret AI recruiting tool that showed bias against women,” Jeffrey Dastin, October 10, 2018. 
  9. Accenture & Harvard Business School, “Hidden Workers: Untapped Talent,” Joseph B. Fuller, Manjari Raman, Eva Sage Gavin, Kristen Hines, September 2021.
  10. Healthcare Finance, “Study finds racial bias in Optum algorithm,” Susan Morse, October 25, 2019.
  11. WIRED, “The Pain Algorithm,” Maia Szalavitz, September 2021.
  12. Vice, “How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud,” Gabriel Geiger, March 1, 2021.
  13. MIT Technology Review, “AI is sending people to jail—and getting it wrong,” Karen Hao, January 21, 2019. 
  14. OurOffice, “The New ROI: Return on Inclusion,” Sonya Sepahban, August 4, 2020.
  15. HBR, “New AI Regulations Are Coming, Is Your Organization Ready?,” Andrew Burt, April 30, 2021.
  16. Bloomberg, “ESG Assets Rising to $50 Trillion Will Reshape $140.5 Trillion of Global AUM by 2025, Finds Bloomberg Intelligence,” Bloomberg Intelligence, July 21, 2021.
  17. Leibniz Institute for Financial Research SAFE, “The Power of ESG Ratings on Stock Markets,” Carmelo Latino, Loriana Pelizzon, Aleksandra Rzeznik, 2021.
  18. Rockefeller Asset Management and NYU Stern Center for Sustainable Business, “ESG & Financial Performance: Uncovering the Relationship by Aggregating Evidence from 1,000+ Studies Published Between 2015 and 2020,” Tensie Whelan, Ulrich Atz, Tracy Van Holt, and Casey Clark, CFA, 2021.
  19. 3BL/CSR Wire, “For Private Equity—ESG Momentum Is Peaking,” February 16, 2022.
  20. Ibid.
  21. Private Equity International, “Making ESG Part of the Pay Day,” Nicholas Sehmer, February 1, 2022.
  22. Paul Weiss ESG Thought Leadership, “ESG Ratings and Data: How to Make Sense of Disagreement,” January 29, 2021; The SustainAbility Institute by ERM, “Rate the Raters 2020,” Christina Wong, Erika Petroy.
  23. Principles for Responsible Investment, “Regulation database update: the unstoppable rise of RI policy,” Hazell Ransome, March 17, 2021.
  24. Intelligize, “Pressure Builds for Better ESG Leadership at Companies,” Erin Connors, June 2021. 
  25. The Washington Post, “Corporate America’s $50 billion promise,” Tracy Jan, Jena McGregor and Meghan Hoyer, August 23/24, 2021.
  26. Best Colleges, “Starting a Career in Artificial Intelligence,” Reece Johnson, December 15, 2021. 
  27. Oracle AI & Data Science Blog, “Study: What are the requirements for data scientist jobs in 2020?” Nedko Krastev, October 22, 2020.
  28. PROLEGO, “Product Manager is the hardest AI position to fill,” Kevin Dewalt, July 13, 2017.
  29.  TechRepublic, “96% of organizations run into problems with AI and machine learning projects,” Macy Bayern, May 24, 2019.
  30. Towards Data Science, “Why 90 percent of all machine learning models never make it into production,” Rhea Moutafis, November 8, 2020.
  31. Towards Data Science, “Artificial Intelligence Has an Enormous Carbon Footprint…,” Emil Walleser, July 14, 2021.
  32. US Bureau of Labor Statistics, “Economic News Release: Employment Situation Summary,” September 3, 2021.
  33. Harvard Business School & Accenture, “Hidden Workers: Untapped Talent,” Joseph B. Fuller, Manjari Raman, Eva Sage Gavin, Kristen Hines, September 2021.
  34. Weapons of Math Destruction, Cathy O’Neil, 2016.
  35. Worklife, “Why the ‘paradox mindset’ is the key to success,” Loizos Heracleous and David Robson, November 11, 2020.