
AI x I = AI2: The OD imperative to add inclusion to the algorithms of artificial intelligence


Abstract

This article details concerns about the potential of machine learning processes to incorporate human biases inherent in social data into artificial intelligence systems that influence consequential decisions in the courts, business and financial transactions, and employment situations. It describes incidents of biased decisions and recommendations made by artificial intelligence systems that have been given the patina of objectivity because they were made by machines supposedly free of human bias. The article offers suggestions for addressing the systemic biases that are impacting the viability, credibility, and fairness of machine learning processes and artificial intelligence systems.
By Frederick A. Miller,
Judith H. Katz, &
Roger Gans
“The addition of a new class of worker, driven by AI, promises to challenge the path to greater inclusion
by having the potential to exponentially increase disruption not just in organizations but in society,
government, and our everyday lives.”
AI x I = AI2
The OD Imperative to Add Inclusion to the
Algorithms of Artificial Intelligence
Since its beginnings, one of the functions 
of OD has been to create organizations that 
enable people to do their best work and 
to create workplaces built on principles 
of democracy and participation. A major 
element of creating such workplaces is 
identifying and ameliorating discrimina-
tory practices and cultures in organiza-
tions. Artificial intelligence (AI) is in the 
process of complicating and confounding 
that function in ways we may not have 
seen coming. There are growing concerns 
about human (and other) biases being built 
into the machine-learning algorithms that 
are increasingly impacting our organiza-
tions, their processes, and our lives. But 
just as AI has the potential to reify and 
magnify the effects of human bias, it also 
offers unprecedented opportunity to build 
inclusive practices into the fundamental 
practices and processes of organizations.
As the following will show, it is clear 
that responsible AI developers must find 
ways to incorporate awareness of the 
potential for bias and the value of inclusion 
into the algorithms that guide machine 
learning processes. But our experience sug-
gests that if the developers of AI systems 
hope to eliminate discrimination and build 
inclusion into their software, they first will 
need to do those things with their own 
culture. In addition, those who are work-
ing in organizations need to be mindful of 
the potential for bias in such processes and 
software so they can ensure the processes 
being implemented are not contributing 
to biases that may already exist within the 
workplace. In this article we discuss some 
of the dangers and opportunities presented 
by AI, and the implications for organiza-
tions, the people of those organizations, 
and OD practitioners tasked with assisting 
them to survive and thrive. 
A New Class of Worker Brings Danger
and Opportunity
Organizations have been, and continue 
to be, disrupted and transformed by the 
addition of women, people of color, people 
from different countries and ethnic origins, 
people with different sexual orientations 
and gender identities, and differently-abled 
people into the workforce and workplace. 
Organizations that have learned to leverage 
the added skillsets and perspectives of their 
increasingly diverse workforces through 
building cultures of inclusion have experi-
enced significant gains in productivity and 
profitability (Katz & Miller, 2017; Miller & 
Katz, 2002; Page, 2007). The addition of a 
new class of worker, driven by AI, promises 
to challenge the path to greater inclusion 
by having the potential to exponentially 
increase disruption not just in organiza-
tions but in society, government, and our 
everyday lives. 
Robots and other machines powered 
by computerized algorithms are already 
working alongside humans in factories 
around the world. Some, with self-pro-
gramming machine-learning capabilities, 
are performing customer service functions, 
implementing marketing strategies, and 
making consequential decisions that can 
determine the opportunities we see, the 
jobs we get, the products we buy, the prices 
we pay, and the treatment we receive from 
officers of the law and the courts. Robots 
are the visible manifestations of artificial 
intelligence—the hands and feet of AI. 
Many of the manifestations and influences 
of AI are less visible, however, and some of 
these are proving to be problematic.
What is Artificial Intelligence Learning
from Humans?
Although some feared it, the great hope of many people was that AI would give us
faster, wiser, fairer decisions and actions 
without the downsides of human error, 
fatigue, or bias. Through the magic of 
machine learning, it would speed customer 
service transactions, unstick the gridlock of 
governmental and organizational bureau-
cracies, eliminate traffic jams, improve 
medical diagnoses and treatments, and 
relieve us of the burden of countless bor-
ingly repetitive tasks. 
But who is teaching the machine? 
And once activated, what will the machine 
teach itself and other machines, especially 
if what it learns is based on human history, 
the content of the Internet, and the biases, 
fears, and unexamined assumptions of its 
coders, programmers, and model build-
ers? Many OD practitioners are trained to 
identify manifestations of bias, oppression, 
and discrimination in organizational 
systems and culturally influenced data. 
But the program developers who write 
the algorithms that drive the machines 
rarely receive such training (Mundy, 2017). 
Without such knowledge, they can overlook 
the danger that the data used to inform 
the AI machine-learning process may have 
culturally determined biases already baked 
in. For example, AI-driven risk-assessment 
tools currently in use in some places sift 
through racially biased arrest records and 
historical crime data to help courts make 
decisions and police departments deter-
mine which neighborhoods should receive 
greater scrutiny and coverage. In doing so, 
they are actively reflecting, perpetuating, 
and magnifying racial inequities caused by 
societal prejudice (Crawford, 2016). 
Bias Is Baked into the Data
It is too late to merely worry that human 
biases might cross over into the computer-
ized programs affecting many individual 
lives and organizational functions. Our 
biases are baked right into our language 
and the language-usage data AI systems 
learn from (Caliskan, Bryson, & Narayanan, 
2017). To cite a readily observable phe-
nomenon, AI-driven language translation 
tools routinely add gendered stereotypes in 
translating from gender-neutral languages:
Google Translate converts these 
Turkish sentences with gender-
neutral pronouns: “O bir doktor. O bir 
hemşire.” to these English sentences: 
“He is a doctor. She is a nurse.” We 
see the same behavior for Finnish, 
Estonian, Hungarian, and Persian in 
place of Turkish. Similarly, translat-
ing the above two Turkish sentences 
into several of the most commonly 
spoken languages (Spanish, English, 
Portuguese, Russian, German, and 
French) results in gender-stereotyped 
pronouns in every case (Caliskan et 
al., 2017).
In 2015, Google’s photo app—powered 
by AI and machine learning processes— 
identified black people in some photos 
as gorillas (Barr, 2015). That same year, 
a Carnegie Mellon University study 
determined that AI-driven, search-based 
advertising promising employment 
assistance for obtaining high-paying 
jobs—for $200,000 and higher—targeted 
significantly fewer women than men 
(Spice, 2015). 
In bail and sentencing hearings in 
courtrooms across the U.S., AI-driven 
software systematically—and mistakenly—
rates black people as higher recidivism 
risks than white people (Angwin, Larson, 
Mattu, & Kirchner, 2016). Based on AI-
driven calculations, insurance companies 
routinely charge residents of zip codes 
with large minority populations up to 30% 
more than residents from whiter neighbor-
hoods with similar accident costs (Angwin, 
Larson, Kirchner, & Mattu, 2017). 
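To make the mechanism concrete, consider a deliberately simplified sketch in Python. It is not any insurer's actual model; the numbers and group labels are invented. The point is that a pricing model can never "see" race and still reproduce a racial disparity, because the disparity is already encoded in the historical premiums it is trained to imitate:

# Hypothetical, simplified illustration -- not any insurer's actual model.
# A model trained to reproduce historical premiums inherits the zip-code
# disparity even though underlying accident costs are identical.

# Synthetic history: (zip_group, accident_cost, premium_charged)
history = [
    ("mostly_white", 1000, 1200), ("mostly_white", 1000, 1180),
    ("mostly_minority", 1000, 1550), ("mostly_minority", 1000, 1570),
]

def fitted_premium(zip_group):
    """'Train' by averaging past premiums per zip group -- the same pattern
    a regression with a zip-code feature would learn from these data."""
    past = [premium for (group, _cost, premium) in history if group == zip_group]
    return sum(past) / len(past)

for group in ("mostly_white", "mostly_minority"):
    print(group, fitted_premium(group))
# Accident costs are equal, yet the learned premiums differ by roughly 30%,
# because the disparity is already encoded in the training data.

The disparity travels through the training data, not through any explicit instruction to discriminate.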
Outcomes like these violate our expec-
tations. We assume machines must be 
inherently fair and objective, that they can-
not help but analyze data without bias or 
malice. But it is easy to forget that the pro-
gramming that drives the way AI systems analyze 
data is originally created by humans. The 
people who create the algorithms belong 
to an industry culture that has bias against 
women and African Americans, even if the evidence is only their conspicuous underrepresentation (Clark, 2016; Mundy, 2017). 
Undoubtedly, few programmers would 
intentionally embed bias in their work, but 
it is hard to address problems you do not 
see, and impossible to avoid doing things 
you do not even know you are doing. Racist 
and sexist assumptions are ingrained in the 
wider societal culture, and perhaps even 
more so in the tech industry subculture 
(Mundy, 2017; Tiku, 2017). 
Computers Learn Bias the Same Way
People Do
Machine learning is a process by which 
computers sift through and process 
enormous amounts of data with a goal of 
identifying underlying patterns in the data, 
which is basically the same way humans 
learn (Emspak, 2016). In both cases, the 
results are most often used to predict 
future actions and behaviors. For early 
human learning, the prediction can involve 
what kinds of vocalizations and facial 
expressions are most likely to elicit a hug, 
food, or a diaper-change. For a machine-
learning computer, the prediction is likely 
to involve which humans to target for prod-
uct advertising and which advertising mes-
sages are most likely to produce sales, but 
it can also involve who to loan money to, 
who to hire, who to promote, and who are 
the greatest risks for committing crimes or 
appearing for trials.
Humans start processing data as 
infants, and we learn the expectations of 
our society from the actions and words of 
all the people with whom we come into 
contact. If there are biases in our upbring-
ing, we can sometimes learn to overcome 
them if we consciously decide to do so. We 
can learn to identify patterns of unfair-
ness and discrimination in other people’s 
attitudes and behavior, and we can seek 
out additional sources of information to 
fact-check biased claims and act to cor-
rect them. But with up to 98% of our own 
attitudes and decisions arrived at through 
unconscious processes, it is harder to 
identify the biases we hold implicitly (Sta-
ats, Capatosto, Wright, & Jackson, 2016). 
Without training and vigilance, AI pro-
grammers and model-builders cannot help 
but perpetuate these implicit, unconscious 
biases in their work.
In machine learning, computers can 
only process the data they receive, and 
they may be restricted to considering 
only specific facets of that data as part of 
their initial human-sourced program-
ming. Add in the fact that virtually all data 
available for analysis, including language 
itself, has roots in human perception and 
interpretation, and it becomes clear that 
bias in machine learning is inevitable. 
Like a child, a machine-learning computer 
builds its vocabulary and “intelligence” 
through pattern recognition (Bornstein, 
2016)—for instance, in how often terms 
and value judgments appear together on 
the Internet and other sources (Caliskan, et 
al., 2017). The word “nurse” is vastly more 
often accompanied by female gendered 
pronouns than by male gendered pro-
nouns. African-American names are often 
surrounded by words that connote unpleas-
antness because people on the Internet say 
awful things, not because African Ameri-
cans are unpleasant. 
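Caliskan et al. (2017) quantified such associations with similarity measures over word vectors learned from large text corpora. The following is a minimal illustrative sketch only, using tiny hand-made vectors rather than real embeddings or the authors' actual method, to show how a similarity score can lean toward gendered words:

import math

# Toy, hand-made vectors for illustration only. Real studies (e.g.,
# Caliskan et al., 2017) use embeddings learned from billions of words,
# where these associations emerge from co-occurrence statistics.
vectors = {
    "nurse":  [0.9, 0.1, 0.3],
    "doctor": [0.2, 0.8, 0.3],
    "she":    [0.8, 0.2, 0.1],
    "he":     [0.1, 0.9, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def gender_lean(word):
    """Positive values: the word sits closer to 'she' than to 'he'."""
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

print("nurse :", round(gender_lean("nurse"), 2))   # positive: associates with "she"
print("doctor:", round(gender_lean("doctor"), 2))  # negative: associates with "he"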
Prejudices produce actions that, 
in turn, produce data. For instance, it 
is widely acknowledged that arrest and 
incarceration data reflect societal biases 
against people of color, a pattern that is 
readily seen in the way drug laws have 
been enforced. While whites and African 
Americans are equally likely to use illegal 
drugs (Lopez, 2015), African Americans are 
roughly three times as likely to be arrested 
and prosecuted for possession of illegal 
drugs (Common Sense for Drug Policy, 
2014). A similar skewing of “objective” 
data can be seen in percentages of women 
serving on corporate boards and in senior 
management positions (Warner, 2014). 
Without specific instructions to consider 
these kinds of patterns as evidence of bias, 
machine-learning computers are likely to 
use these data to predict that African Amer-
icans are three times as likely as whites to 
be carrying illicit drugs (which can be used 
as a justification for racial profiling and 
stop-and-frisk practices), and that women 
lack certain leadership qualities.
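A back-of-the-envelope sketch (with invented numbers) shows how this happens: if two groups use drugs at the same rate but one is policed three times as heavily, a model trained only on arrest records will "learn" a threefold difference in risk that does not exist in behavior:

# Illustrative arithmetic only (synthetic numbers): equal underlying drug
# use, but enforcement concentrated on one group, so arrest records --
# the "objective" training data -- encode a 3-to-1 disparity.

population = {"group_a": 10_000, "group_b": 10_000}
true_use_rate = 0.10                      # identical for both groups
arrest_rate_given_use = {"group_a": 0.05, "group_b": 0.15}  # 3x enforcement

for group, size in population.items():
    users = size * true_use_rate
    arrests = users * arrest_rate_given_use[group]
    # A model trained on arrests alone would estimate "risk" like this:
    learned_risk = arrests / size
    print(group, "true use:", true_use_rate, "learned 'risk':", learned_risk)

# Both groups truly use at 10%, but the learned "risk" is 0.5% vs. 1.5%.
# The model recovers the policing pattern, not the behavior.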
Don’t Ask, Because We Can’t Tell
Because machines are assumed to be fair 
and unbiased, machine-produced predic-
tions, and the resulting recommendations 
and decisions, are less likely to be ques-
tioned as biased than if they had come 
from human agents (The AI Now Report, 
2016). Not only is it less likely a machine’s 
decision will be questioned, its decision is 
also significantly harder to question than 
a human’s. AI-developers such as Google 
and Amazon consider their algorithms to 
be proprietary information, and they pro-
tect them vigorously. Moreover, particularly 
in advanced machine-learning systems, the 
details of any individual prediction may 
be based on literally billions of individual 
digital processes and, as such, are opaque 
even to the original coders (Bornstein, 
2016; Knight, 2017). In other words, while 
humans may be asked to account for and 
justify what seem like biased decisions, 
machines may not be able to provide 
such explanations—and neither will their 
creators.¹
Companies that offer AI services to 
other companies may tout the speed and 
capability of their processes, but unless 
they offer transparency in the development 
of their algorithms and the training of 
their people, there is no way for their client 
organizations to know if the AI package 
includes baked-in biases. OD practitioners 
working to eliminate institutionalized 
“isms” in organizational interactions and 
systems need to be aware of the potential of 
AI to institutionalize those “isms” in ways 
that are much harder to detect, challenge, 
or change.

1. The European Union's General Data Protection Regulation (GDPR), which goes into effect in May 2018, is meant to protect the right of individuals to know how their personal data is used. There is a view that the GDPR includes a "right of explanation" as to how outputs are generated from machine learning models. If true, companies that are building these models may need to demonstrate that they have removed bias from those outputs. More information is available at: http://www.eugdpr.org/
Bias In, Bias Out:
Coder Culture Resists Change
As detailed above, AI-driven decision-mak-
ing processes can produce biased outcomes 
that reflect the same sets of “isms” OD 
practitioners and others have been work-
ing to ameliorate for decades. The evidence 
suggests that if the biases exist in the 
wider society, they will be "learned" by AI systems trained on the collective behavior and data of that society. 
This would be less of a problem if 
the programmers writing the algorithms 
on which machine-learning systems run 
were more aware of the biases that exist in 
the wider society, and by extension, in the 
data sets produced by that society. Greater 
awareness would make them better able 
to ensure their coding efforts include 
strategies for identifying patterns of bias in 
societally-influenced data and safeguards 
against existing, documented biases. 
Making such awareness more normative 
within the tech industry will be a challeng-
ing undertaking. Of course, as might be 
expected in the tech industry, “there’s an 
app for that,” with a proliferation of anti-
bias apps and training workshops that try 
to reduce bias itself to an algorithm. But 
there continues to be unwillingness among 
some tech companies to change core parts 
of their culture (Mundy, 2017). 
Celebration of the tech industry’s 
coding community as an elite, exclusive, 
meritocratic club seems to be a deeply 
entrenched ethos, sometimes defended by 
claims that the sparse numbers of women 
and African Americans are a consequence 
of a reluctance to “lower our standards” 
(Mundy, 2017). Racial stereotyping is a 
well-acknowledged problem within the 
software industry (Tiku, 2017). Gender 
stereotyping, in contrast, seems to attract 
more attention as well as greater pushback 
when attempts are made to address it 
(Wakabayashi, 2017). In recent years, the 
tech industry has produced an increasing 
number of reports on their companies’ 
diversity numbers, but little in the way of 
positive change in those numbers or the 
cultures that have produced and sustained 
them. Studies have shown that women 
leave the tech industry at twice the rate that 
men do, and that the percentage of com-
puter science degrees earned by women 
has decreased from 37% in 1984 to 18% in 
2014 (Alba, 2017). Some diversity educa-
tion programs at tech companies have 
seemed to produce boomerang effects, with 
declines in diversity at some of the compa-
nies in which such programs were enacted 
(Alba, 2017).
Not Just a Tech Issue:
AI’s Expanding Presence
It may seem tempting to focus warnings 
about bias and discriminatory implications 
of AI solely on the tech industry, but AI-
driven processes and services are already 
part of the routine experience of everyday 
life inside organizations of all sizes in all 
industries. (How many times have you 
Googled something today?) In fact, people 
in organizations outside the tech indus-
try are even less likely to question the 
algorithms and machine-logic on which 
AI-influenced decisions are made than 
within the tech industry. Without a keen 
awareness of the potential for baked-in bias 
in their AI-driven systems, some organiza-
tions are at risk of inadvertently becoming 
party to actions that have a discriminatory 
effect on their customers or their team 
members, with potentially dire bottom-
line consequences in either case. This may 
already be influencing hiring practices, 
in which AI is increasingly used in talent 
sourcing and acquisition. AI is being used 
to make the candidate-selection process 
faster and more efficient (Alsever, 2017), 
and to root out human biases (Captain, 
2016), but because it relies on human-pro-
grammed choice trees and human-gener-
ated data in deciding which candidates are 
the best “fits,” the process also can rule out 
some of the diversity organizations are—or 
ought to be—seeking (Ghosh, 2017).
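A deliberately simplified sketch (a hypothetical scoring rule, not any vendor's product) shows how a "fit" score built from past hires can penalize candidates who merely differ from the people an organization has hired before:

# Simplified, hypothetical screening score: similarity to past hires.
# If past hires skew toward one profile, candidates who differ from that
# profile score poorly, regardless of their ability to do the job.

past_hires = [
    # (years_experience, attended_big_name_school)
    (5, 1), (6, 1), (4, 1), (7, 1), (5, 1),
]

def fit_score(candidate):
    """Average similarity to past hires -- rewards resembling whoever was
    already hired, which is how 'culture fit' models can encode bias."""
    years, big_name = candidate
    sims = []
    for h_years, h_big in past_hires:
        sims.append(1.0 / (1.0 + abs(years - h_years) + abs(big_name - h_big)))
    return sum(sims) / len(sims)

print(fit_score((5, 1)))  # resembles past hires -> higher score
print(fit_score((5, 0)))  # equally experienced, different background -> lower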
There is an upside in all this for those 
seeking to address issues of inclusion and 
diversity in organizations, however. The 
potential for bias in AI systems can actu-
ally be a useful tool for OD practitioners. 
By raising concerns about machine-based 
biases in organizational practices, we may 
also be able to raise awareness of how 
unconscious bias is carried like an “equal 
opportunity virus” (Dasgupta, 2013) by all 
the humans of the organization. Consid-
ering its effects, of course, bias might be 
more accurately considered an “un-equal 
opportunity virus.”
AI and OD:
What’s Around the Corner
The rippling effects of AI promise to 
impact virtually all facets of organizational 
life, from decisions about who to hire and 
promote, to design and marketing of prod-
ucts and services, to each organization’s 
competitive position and reputation in the 
global marketplace. Instead of disregard-
ing it as too technical for our purview, OD 
practitioners need to see AI as a critical 
element of the organization that needs to 
be analyzed and addressed in regard to 
its effects on institutionalized “isms” and 
people’s ability to do their best work. 
There are more implications for the 
role of OD in addressing issues of AI than 
can be covered in any single article. Some 
of the AI-related issues OD practitioners 
should anticipate facing include:
AI-fueled entrepreneurship. As access to 
the tools of AI becomes more widespread, 
it is likely to spur the growth of entrepre-
neurial start-ups that focus on applying the 
potential of AI to solve an ever-widening 
array of personal and commercial needs 
(Lee, 2017). The role of OD will be criti-
cal in assisting these start-ups to avoid the 
toxic-culture missteps of tech start-ups like 
Uber (Noguchi, 2017) and SoFi (O’Connor, 
2017). 
Worker disruption and displacement.
Robots powered by AI-systems are already 
replacing people in manufacturing plants, 
warehouses, banks, and supermarkets 
throughout the world. Other types of jobs 
will inevitably be replaced or displaced as 
AI systems become more sophisticated. 
Challenges for the practice of OD are likely 
to include working to create a culture that 
enables people to work effectively with 
robots and advanced AI: How will workers
react and relate to non-human co-workers?
Will work teams accept an AI as a team-
mate or an agent of management? OD 
practitioners will almost certainly need to 
prepare the organization and its people for 
widespread role-changes and potentially 
stressful rounds of outplacement and 
downsizing. The shapes of the changes to 
come are difficult to predict, but preparing 
organizations and the people in them for 
inevitable and increasingly rapid AI-related 
change is a necessity. 
How to Add Inclusion to the AI Algorithm:
AI x I = AI2
To address the issue of bias in AI, it will 
be essential to address the culture of the 
coders as well as the code. Following are a 
few suggestions for changing the culture of 
the tech industry to be more inclusive and 
more aware of the potential for bias in its 
members and their code.
A strategy for creating culture change
within tech organizations and among cod-
ers. Before AI model builders—and those 
working in partnership with them—can be 
expected to root out biases and inequities 
from their algorithms and AI-based prod-
ucts, they will need the competence and 
capability to recognize and address those 
biases and inequities. They will also need 
to accept that those biases and inequities 
are real, harmful, and consequential. Any 
efforts to address the prevailing practices 
and mindsets of the tech industry in this 
regard must start with awareness that some 
aspects of coder-culture have deep-seated 
resistance to change, as noted above. The 
following elements might better position 
such a culture-change strategy for success.
Education. This may be an occasion 
to brandish Churchill’s “those who fail to 
learn from history are doomed to repeat it.” 
Claims regarding “lowering our standards” 
were exposed decades ago as pretexts for 
excusing the exclusion of women, people 
of color, and others deemed undesirable (Cross, 
Katz, Miller, & Seashore, 1994). It will be 
vital to help those involved with AI to gain 
greater competence in recognizing bias in 
themselves and societally produced data. 
Although many organizations are doing 
education/training on unconscious bias, 
that alone will not solve this issue. Efforts have to go beyond personal awareness to scrutiny of how the data themselves may reflect biases, and then to reimagining how to use AI's data-crunching abilities to avoid 
perpetuating longstanding patterns of 
discrimination. 
Education in this direction could 
include exposing tech industry members 
to evidence of their own biases, as well 
as documentation of biases in the data 
used in machine-learning applications. 
Motivation for change could be addressed 
with additional education regarding the 
value-added and return-on-investment 
of inclusive practices (e.g., Katz & Miller, 
2017; Page, 2007) as well as the costs 
of bias-centered lawsuits and public 
relations disasters.
Socialization. People cannot adopt a 
cultural norm of inclusive behaviors until 
they experience that norm. To accomplish 
this, it will be necessary to establish pilot 
groups that practice and model inclusive 
mindsets and actions, and to nurture these 
groups with education and organizational 
support. Ideally, these pilot groups will 
grow and eventually form the core of each 
organization’s new culture. 
Certification. A program that requires 
and provides certification of competence 
for recognizing bias and practicing inclu-
sive behaviors seems a particularly apt 
accountability tool for the software indus-
try. AI programmers could be required 
to pass multicultural competence tests or 
attend education programs that address 
bias, diversity, inclusion, and the practice 
of self-as-instrument. They might also 
undergo periodic recertification processes 
that could include 360-degree reviews from 
a diverse group including their team lead-
ers, colleagues, and direct reports.
A strategy for overseeing code quality and
addressing grievances. Because of the 
specialized nature of this field, few people 
possess the competence to recognize 
defects or flaws in computer programs, and 
fewer can trace potential problems with the 
deep processes involved in machine learn-
ing. This has created problems with regard 
to accountability and redress of issues that 
affect people’s lives and livelihoods, and 
suggests a need for creation of at least two 
sets of human-staffed resources:
Organizational and industry-wide
peer-review boards. To protect the integrity 
of the organizations producing the code, 
there needs to be a process for some of 
the AI-based products to have their code 
(and the results of pilot runs for deep-
process machine-learning applications) 
reviewed by an independent diverse panel 
of experts before being released into the 
public sphere.
Organizational and industry-wide AI
grievance panels. It should be assumed 
that AI applications will produce unex-
pected and unintended inequities. Each 
organization that produces AI-based 
products could establish a standing panel 
to address grievances from consumers 
and others affected by their products, 
either directly or indirectly. For consum-
ers who are not satisfied with the redress 
given them by the manufacturing orga-
nization, there could be an industry-wide 
appeals panel that would hold organiza-
tions accountable.
A strategy that requires immediate action.
Regardless of the industry, OD practitio-
ners cannot wait for a world-changing 
robot apocalypse to sound the alarm or to 
start addressing the issues of AI. We need 
to be mindful that this is happening now, 
and at a pace that is accelerating. We can-
not settle for a "let the buyer beware" market for AI products. When the organizations we support purchase such products, we must help them beware until we are sure that anti-bias safeguards are in place and that the programmers and sellers are aware enough to have made their products "safe" for our diverse world. 
We need to be willing to get into the 
messy work of understanding how bias is 
being built into these systems. We need to 
be willing to venture outside our com-
fort zones in questioning the fitness and 
objectivity of algorithms we may not have 
the technological savvy to understand, 
but whose biased effects we can and need 
to identify.
Conclusion: This is Just the Beginning
Whether you believe AI has the potential 
to create an Eden-like utopia (Lee, 2017) or 
bring about the extinction of humankind 
(Dowd, 2017) or something in between, 
it is clear that AI will exert greater and 
greater influence over virtually all aspects 
of individual and organizational life (The 
AI Now Report, 2016). For practitioners of 
OD, the challenge will be not just to assist 
organizations to recognize and address the 
inherent dangers presented by AI, but also 
to recognize the potential of AI to integrate 
inclusive algorithms into the fabric of 
their existence. 
Today, our task is to identify and root 
out the biases and inequities of human 
society that are being absorbed through 
machine-learning processes and presented 
as objective and unquestionable reality. 
This is no small task! However, we would 
be remiss if we did not also address the 
positive potential of AI. Consider applying 
the power of AI to any of these “what ifs”:
» What if, instead of equating data with purely objective facts, AI routinely identified patterns that could be the result of societal or organizational biases and discrimination, and sounded alarm bells? (A minimal sketch of one such alarm follows this list.)
» What if, instead of selecting only the job candidates who fit our existing organization profile, AI selected an array of candidates who provide the perspectives we currently lack?
» What if, instead of showing us only the news we are likely to be most interested in, AI showed us the news we most need to see to be well-rounded, responsible citizens?
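As one small illustration of the first "what if," the sketch below applies the familiar four-fifths rule of thumb for adverse impact to selection rates. The numbers and group labels are invented, and a real audit would be far more nuanced:

# Minimal sketch of an automated "alarm bell": compare selection rates
# across groups against the four-fifths rule of thumb for adverse impact.
# Figures and group labels are made up for illustration.

selected = {"group_a": 60, "group_b": 25}
applicants = {"group_a": 100, "group_b": 100}

def disparate_impact_ratio(favored, other):
    rate_favored = selected[favored] / applicants[favored]
    rate_other = selected[other] / applicants[other]
    return rate_other / rate_favored

ratio = disparate_impact_ratio("group_a", "group_b")
if ratio < 0.8:  # four-fifths threshold
    print(f"Alarm: selection-rate ratio {ratio:.2f} suggests possible adverse impact.")
else:
    print(f"Ratio {ratio:.2f}: no four-fifths flag, though bias may still exist.")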
These are the kinds of questions an 
inclusive, culturally competent AI coding 
and consuming community would ask 
about how AI could enhance human inter-
action. What the results might be, we can 
only imagine.
References
Alba, D. (2017, March 31). Hey tech giants: 
How about action on diversity, not just 
reports? Wired. Retrieved from https://
www.wired.com/2017/03/hey-tech-giants-
action-diversity-not-just-reports/
Alsever, J. (2017, May 19). How AI is chang-
ing your job hunt. Fortune. Retrieved 
from http://fortune.com/2017/05/19/
ai-changing-jobs-hiring-recruiting/
Angwin, J., Larson, J., Kirchner, L., & 
Mattu, S. (2017, April 5). Minority 
neighborhoods pay higher car insur-
ance premiums than white areas with 
the same risk. ProPublica. Retrieved 
from https://www.propublica.org/article/
minority-neighborhoods-higher-car-insur-
ance-premiums-white-areas-same-risk
Angwin, J., Larson, J., Mattu, S., & Kirch-
ner, L. (2016, May 23). Machine bias: 
There’s software used across the 
country to predict future criminals. And 
it’s biased against blacks. ProPublica.
Retrieved from https://www.propublica.
org/article/machine-bias-risk-assessments-
in-criminal-sentencing
Barr, A. (2015, July 1). Google mistakenly 
tags black people as ‘gorillas,’ showing 
limits of algorithms. The Wall Street
Journal, retrieved from https://blogs.wsj.
com/digits/2015/07/01/google-mistakenly-
tags-black-people-as-gorillas-showing-
limits-of-algorithms/
Caliskan, A., Bryson, J.J., & Narayanan, A. 
(2017). Semantics derived automatically 
from language corpora contain human-
like biases. Science, 356, 183–186. DOI: 
10.1126/science.aal4230 (Supplemen-
tal Materials: www.sciencemag.org/
content/356/6334/183/suppl/DC1)
Captain, S. (2016, May 18). Can artificial 
intelligence make hiring less biased? 
Fast Company. Retrieved from https://
www.fastcompany.com/3059773/
we-tested-artificial-intelligence-platforms-
to-see-if-theyre-really-less-
Clark, J. (2016, June 23). Artificial intel-
ligence has a ‘sea of dudes’ problem. 
Bloomberg Technology, retrieved from 
https://www.bloomberg.com/news/arti-
cles/2016-06-23/artificial-intelligence-has-
a-sea-of-dudes-problem
Common Sense for Drug Policy. (2014). 
“Race and Prison.” Drug War Facts.
Retrieved from http://drugwarfacts.org/
chapter/race_prison#sthash.WRkTtM10.
dpbs
Crawford, K. (2016, June 25). Artificial 
intelligence’s White Guy problem. The
New York Times, retrieved from https://
www.nytimes.com/2016/06/26/opinion/
sunday/artificial-intelligences-white-guy-
problem.html
Cross, E.Y., Katz, J.H., Miller, F.A., & Sea-
shore, E.W. (Eds.) (1994). The promise of
diversity: Over 40 voices discuss strategies
for eliminating discrimination in organi-
zations. Burr Ridge, IL: Irwin Profes-
sional Publishing.
Dasgupta, N. (2013). Implicit attitudes and 
beliefs adapt to situations: A decade 
of research on the malleability of 
implicit prejudice, stereotypes, and the 
self-concept. Advances in Experimental
Social Psychology, 47, 233–279. dx.doi.
org/10.1016/B978-0-12-407236-7.00005-X
Dowd, M. (2017, April). Elon Musk’s 
billion-dollar crusade to stop the A.I. 
apocalypse. Vanity Fair, April 2017. 
Retrieved from https://www.vanityfair.
com/news/2017/03/elon-musk-billion-
dollar-crusade-to-stop-ai-space-x
Emspak, J. (2016, December 29). How a 
machine learns prejudice. Scientific
American, retrieved from https://
www.scientificamerican.com/article/
how-a-machine-learns-prejudice/
Ghosh, D. (2017, October 17). AI is 
the future of hiring, but it’s far 
from immune to bias. Quartz at
Work. Retrieved from https://work.
qz.com/1098954/ai-is-the-future-of-hiring-
but-it-could-introduce-bias-if-were-not-
careful/
Katz, J.H., & Miller, F.A. (2017). Leverag-
ing differences and inclusion pays off: 
Measuring the impact on profits and 
productivity. OD Practitioner, 49(1), 
56–61.
Knight, W. (2017, April 11). The dark 
secret at the heart of AI: No one 
really knows how the most advanced 
algorithms do what they do. That 
could be a problem. MIT Technol-
ogy Review. Retrieved from https://
www.technologyreview.com/s/604087/
the-dark-secret-at-the-heart-of-ai/
Lee, T.E. (2017, May 18). Artificial intel-
ligence is getting more powerful, 
and it’s about to be everywhere. Vox.
Retrieved from https://www.vox.
com/new-money/2017/5/18/15655274/
google-io-ai-everywhere
Lopez, G. (2015, October 1). Black and 
white Americans use drugs at similar 
rates. One group is punished more 
for it. Vox. Retrieved from https://
www.vox.com/2015/3/17/8227569/
war-on-drugs-racism
Miller, F.A., & Katz, J.H. (2002). The inclu-
sion breakthrough: Unleashing the real
power of diversity. San Francisco, CA: 
Berrett-Koehler Publishers, Inc.
Mundy, L. (2017, April). Why is Silicon 
Valley so awful to women? The Atlantic.
Retrieved from https://www.theatlantic.
com/magazine/archive/2017/04/why-is-
silicon-valley-so-awful-to-women/517788/
Noguchi, Y. (2017, June 6). Uber fires 20 
employees after sexual harassment 
claim investigation. NPR. Retrieved 
from http://www.npr.org/sections/
thetwo-way/2017/06/06/531806891/uber-
fires-20-employees-after-sexual-harassment-
claim-investigation
O’Connor, C. (2017, September 12). SoFi 
CEO Mike Cagney resigns following 
sexual harassment lawsuit. Forbes.
Retrieved from https://www.forbes.com/
sites/clareoconnor/2017/09/12/sofi-ceo-
mike-cagney-resigns-following-sexual-
harassment-lawsuit/#6847d9b565be
Page, S.E. (2007). The empirical evidence. 
In S.E. Page, The difference: How diver-
sity creates better groups, firms, schools,
and societies (pp. 313–337). Princeton, 
NJ: Princeton University Press.
Resnick, B. (2017, April 17). How artificial 
intelligence learns to be racist. Vox,
retrieved from https://www.vox.com/
science-and-health/2017/4/17/15322378/
how-artificial-intelligence-learns-how-to-
be-racist
Spice, B. (2015, July 7). Questioning the 
fairness of targeting ads online: CMU 
probes online ad ecosystem. Carn-
egie Mellon University News, retrieved 
from http://www.cmu.edu/news/stories/
archives/2015/july/online-ads-research.
html
Staats, C., Capatosto, K., Wright, R.A., 
& Jackson, V. W. (2016). Implicit
bias review, 2016 edition. Ohio State 
University: Kirwan Institute for the 
Study of Race and Ethnicity. Retrieved 
from http://kirwaninstitute.osu.edu/
my-product/2016-state-of-the-science-
implicit-bias-review/
The AI Now Report. (2016, September 22). 
The social and economic implications 
of artificial intelligence technologies 
in the near-term. AI Now (Summary 
of public symposium). Retrieved from 
https://artificialintelligencenow.com/
media/documents/AINowSummaryRe-
port_3_RpmwKHu.pdf
Tiku, N. (2017, October 3). Why tech 
leadership has a bigger race than 
gender problem. Wired. Retrieved 
from https://www.wired.com/story/
tech-leadership-race-problem/
Wakabayashi, D. (2017, August 7). Google 
fires engineer who wrote memo 
questioning women in tech. The New
York Times. Retrieved from https://www.
nytimes.com/2017/08/07/business/google-
women-engineer-fired-memo.html
Warner, J. (2014, March 7). Fact sheet: 
The women’s leadership gap. Center
for American Progress. Retrieved from 
https://www.americanprogress.org/issues/
women/reports/2014/03/07/85457/
fact-sheet-the-womens-leadership-gap/
Frederick A. Miller and Judith H. Katz are CEO and Executive Vice President
(respectively) of The Kaleel Jamison Consulting Group, Inc., one of Consulting
Magazine’s Seven Small Jewels in 2010. They have partnered with Fortune 50
companies globally to elevate the quality of interactions, leverage people’s
differences, and transform workplaces. Katz sits on the Dean’s Council, Col-
lege of Education at the University of Massachusetts, Amherst, and the Board
of Trustees of Fielding Graduate University. Miller serves on the boards of
Day & Zimmermann, Rensselaer Polytechnic Institute’s Center for Automated
Technology Systems, and Hudson Partners. Both are recipients of the OD
Network’s Lifetime Achievement Award and have co-authored several books,
including Opening Doors to Teamwork and Collaboration: 4 Keys that Change
EVERYTHING (Berrett-Koehler, 2013) as well as a book on workplace psy-
chological and emotional safety, to be published in Fall 2018. Miller can be
reached at fred411@kjcg.com. Katz can be reached at judithkatz@kjcg.com.
Roger Gans, MA, ABD, is a writer, consultant, and educator who specializes
in strategic communication. He has been a long-time thinking and writing
partner of Miller, Katz, and KJCG. An adjunct professor in the management and
communication departments of the Sage Colleges, his doctoral dissertation
examines how pro-social advocacy campaigns can exacerbate engagement
disparities in civic affairs, health care, and the workplace. His current consult-
ing projects include promoting health care services on Eastern Long Island
(NY) and development of a youth addiction services program in Iowa. Gans
can be reached at rgans@albany.edu.
... Train AI Developers to Detect Bias. It has also been suggested that AI developers should all receive training to make them more adept at recognizing potential sources of bias in their designs and training data, and that the industry should perhaps require AI developers to have certification of such training (Miller et al., 2018). Human organizational developers already receive training to identify manifestations of bias, oppression, and discrimination (Miller et al., 2018), but developers of HR AI systems, which might increasingly make decisions that impact human lives, do not (Mundy, 2017). ...
... It has also been suggested that AI developers should all receive training to make them more adept at recognizing potential sources of bias in their designs and training data, and that the industry should perhaps require AI developers to have certification of such training (Miller et al., 2018). Human organizational developers already receive training to identify manifestations of bias, oppression, and discrimination (Miller et al., 2018), but developers of HR AI systems, which might increasingly make decisions that impact human lives, do not (Mundy, 2017). Even awareness of bias, however, might not induce developers to correct it if the consequences of such bias are unclear. ...
... There are two problems with this assumption, however. First, some AI algorithms are proprietary (Tambe et al., 2019), including those developed by Google and Amazon (Miller et al., 2018). Second, access to the code of an algorithm might not provide a clear understanding of how it works. ...
... It will be a big ethical issue of forcing students to use these algorithms as part of their education even if they explicitly agree to do consent their privacy (Bulger, 2016;Regan & Steeves, 2019). The issue of AI bias and prejudice in K-12 education has been widely reported by several studies (e.g., Chaudhry & Kazim, 2022;Leaton Gray, 2020;Miller et al., 2018;Murphy, 2019;Steinbauer et al., 2021;Zanetti et al., 2019;Zhang et al., 2022). The bias results in language learning through AI technology appear due to the improper fed of natural language in programming or because of its system error (Zanetti et al., 2019). ...
... For example, while Google translate Turkish equivalent of "she/he is a nurse" into the feminine form, it translates the Turkish equivalent of "she/he is a doctor" into the masculine form (Johnson, 2020). Miller et al. (2018) also revealed societal biases and gender-specific stereotypes when AI system used to translate language. Moreover, bias is apparent in score distribution of students, for example, AI system classifies students automatically who attended private or independent schools and underrepresented groups unfairly. ...
Article
Currently, artificial intelligence (AI) is being rapidly incorporated into K-12 education because of its increasing social and pedagogical importance. The integration of AI in K-12 education is likely to have a profound influence on the lives and learning styles of learners, teaching approaches of teachers, and the whole mechanism of school management systems. As AI technologies are new to K-12 school curriculum, the research on AI for K-12 classrooms is under explored. In this study, the current state of AI integration in school education and risks associated with it were explored. The study using a systematic review method attempted to explore the findings, observations, and results of recent research regarding the possible risk factors of AI integration in K-12 education. At initial search using predefined key terms, 390 articles were recorded. Using inclusion and exclusion criteria, 71 articles covering 34 journals and other publications were selected for final analysis. Selected 71 articles reported that AI innovation incorporation into K-12 education is associated with certain risks and challenges. Through a systematic review technique, we categorized them into major 6 risk areas namely Privacy and Autonomy Risks, AI Biases, Accuracy and Functional Risks, Deepfakes and FATE Risks, Social Knowledge and Skill Building Risks, and Risk in Shifting Teacher's Role. The study also explored these risk areas to provide an overview of how these are connected with teaching and learning process of K-12 education.
... The absence of a regulatory framework for integrating AI applications into education may create ethical and societal risks as it may keep the poor and marginalised at a disadvantage (Miller et al., 2018). The rapid advancement of emerging technologies and the accompanying shift to digital processes pose considerable difficulties for society and across various levels of the education system (Schmidt & Strasser, 2022). ...
Article
Full-text available
Biodata Dr Binu P.M. is a senior faculty member at the English Language Centre of the University of Technology and Applied Sciences, Al Musannah (UTASA), Oman. He has an MA and an MPhil in English Language and Literature, a B.Ed. in Methods of Teaching English, a Cambridge CELTA, and a PhD in ELT. He is the author of the book 'Slow Learners in the English Classroom'. In addition, he has published and presented research papers at several international forums. His professional interests include learning strategies, intercultural communication, computational linguistics, discourse analysis and technology integration in English language education. ORCID id: https://orcid.org/ Abstract: In this study, I explore the effects of the transition from technology integration to the application of artificial intelligence (AI) in English language education and uncover the affordances and challenges associated with this shift. As technology evolves rapidly, ELT practitioners eagerly embrace modern education apps to enhance language learning experiences. Large Language Models (LLMs) and generative chatbots offer teachers various opportunities in lesson preparation and delivery, assessment, feedback, student advising and independent learning. However, the widespread use of AI tools in education has become a challenge to academic integrity, mainly due to learners' misuse of generative chatbots like ChatGPT. Although some sophisticated self-learning chatbots contribute to learner autonomy, they pose serious ethical challenges to data privacy and teacher roles and create a digital divide. By examining the affordances and challenges of integrating artificial intelligence into English language education, I recommend that educators and researchers reflect critically on the implications of AI in ELT and gain insights into effective strategies for harnessing the potential of AI to transform language education rather than rejecting the latest innovation in educational technology as a threat. While admitting the large-scale misuse or abuse of AI tools by students and researchers, I highlight their affordances in English language teaching, learning and research and propose creating a framework to facilitate the legitimate use of AI tools in education.
... Scholars have highlighted the importance of Fairness, Accountability, Transparency, and Ethics (FATE) of AI in education, encouraging the use of eXplainable AI (XAI) whereby the reasons for decisions made by AI are transparent and available . The range of ethical issues that needs to be addressed is extensive and nuanced, for instance, perpetuation of existing systemic bias and discrimination, privacy and inappropriate data use as well as amplifying inequity for students from disadvantaged and marginalized groups (Akgun & Greenhow, 2021;Miller et al., 2018). Differences in access to AI platforms have the potential to expand inequality gaps for certain sub-populations, for instance, low-socio economic, female, and Indigenous students (Celik, 2023). ...
Article
Full-text available
There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have a little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.
... De igual manera, Miller et al. (2018) advirtieron sobre los posibles peligros de utilizar datos sociales, incluyendo prejuicios humanos, para entrenar sistemas de IA, lo que podría llevar a procesos de toma de decisiones sesgados. Del mismo modo, Akgun and Greenhow (2022) informan sobre los riesgos de adoptar tecnologías basadas en IA en la academia, incluyendo la posible preservación de sesgos sistémicos existentes y discriminación, la perpetuación de injusticias para estudiantes de grupos históricamente desfavorecidos y marginados, y la amplificación del racismo, el sexismo, la xenofobia y otras prácticas de prejuicio e injusticia. ...
Article
Full-text available
Introducción: Este estudio explora los efectos de los chatbots de Inteligencia Artificial (IA), con un enfoque particular en ChatGPT de OpenAI, en las Instituciones de Educación Superior (IES). Con el rápido avance de la IA, comprender sus implicaciones en el sector educativo se vuelve de suma importancia. Métodos: Utilizando bases de datos como PubMed, IEEE Xplore y Google Scholar, realizamos una búsqueda sistemática de literatura sobre el impacto de los chatbots de Inteligencia Artificial (IA) en las Instituciones de Educación Superior (IES). Nuestros criterios dieron prioridad a los artículos revisados por pares, medios de comunicación destacados y publicaciones en inglés, excluyendo menciones tangenciales de chatbots de IA. Después de la selección, la extracción de datos se centró en los autores, el diseño del estudio y los hallazgos principales. El análisis combinó enfoques descriptivos y temáticos, haciendo hincapié en los patrones y aplicaciones de los chatbots de IA en las IES. Resultados: La revisión de la literatura reveló perspectivas diversas sobre el potencial de ChatGPT en la educación. Entre los beneficios destacados se incluyen el apoyo a la investigación, la calificación automatizada y la mejora de la interacción entre humanos y computadoras. Sin embargo, se identificaron preocupaciones, como la seguridad en las pruebas en línea, el plagio y los impactos más amplios en la sociedad y la economía, como la pérdida de empleos, la brecha en la alfabetización digital y la ansiedad inducida por la IA. El estudio también resaltó la arquitectura transformadora de ChatGPT y sus aplicaciones versátiles en el sector educativo. Además, se destacaron posibles ventajas como la simplificación de la inscripción, la mejora de los servicios estudiantiles, el fortalecimiento de la enseñanza, la ayuda a la investigación y el aumento de la retención estudiantil. Por otro lado, se identificaron riesgos, como la violación de la privacidad, el uso indebido, el sesgo, la desinformación, la disminución de la interacción humana y problemas de accesibilidad. Discusión: Si bien la expansión global de la IA es innegable, existe una necesidad apremiante de una regulación equilibrada en su aplicación dentro de las IES. Se alienta a los miembros del cuerpo docente a utilizar herramientas de IA como ChatGPT de manera proactiva y ética para mitigar riesgos, especialmente el fraude académico. A pesar de las limitaciones del estudio, que incluyen una representación incompleta del efecto general de la IA en la educación y la falta de pautas concretas de integración, es evidente que las tecnologías de IA, como ChatGPT, presentan tanto beneficios significativos como riesgos. El estudio aboga por una integración reflexiva y responsable de dichas tecnologías en las IES.
... This content analysis reveals that the literature on AI in education focuses on potential benefits and challenges associated with its use, AI literacy skills for students and educators, and a human-centered approach to designing AI systems. Among these themes, challenges related to the use of AI stand out as particularly significant [19], [9], [11], [17], [21], [22] & [23]. The analysis identified the need for high-quality data, difficulties in integrating AI technology into existing educational systems and practices, ethical concerns, and data privacy and security concerns as the most critical challenges. ...
Article
Full-text available
AI is rapidly being utilized in education, potentially altering the learning process by enhancing outcomes, increasing efficiency, and enriching the educational experience. However, the use of AI in education raises several concerns, including the need for high-quality data, ethical and privacy concerns, challenges in integrating AI technology into existing educational systems and practices, and a lack of expertise and knowledge of AI among educators and learners. This study investigated these topics by doing a content analysis of one of the most recent issues of the journal Educational Technology and Society (ET&S), with an emphasis on AI in education. The study adopted qualitative content analysis to identify and categorize the prevalent themes and issues connected with AI in education, such as possible advantages, challenges and risks, ethical and privacy concerns, and the need for additional AI education and training. The findings of this study suggest that a thoughtful and careful approach is necessary for integrating AI into education, focusing on addressing critical challenges identified in this study.
... In the same way, Miller et al. (2018) cautioned about the potential perils of using social data, including human prejudice to train AI systems, which could lead to prejudicial decision-making processes. Similarly, Akgun and Greenhow (2022) inform the risks of adopting AI-based technologies in academia, including the likely preservation of prevailing systemic bias and discrimination, the perpetuation of unfairness for students from historically deprived and marginalized groups, and magnification of racism, sexism, xenophobia, and other practices of prejudice and injustice. ...
Article
Full-text available
Introduction This study explores the effects of Artificial Intelligence (AI) chatbots, with a particular focus on OpenAI’s ChatGPT, on Higher Education Institutions (HEIs). With the rapid advancement of AI, understanding its implications in the educational sector becomes paramount. Methods Utilizing databases like PubMed, IEEE Xplore, and Google Scholar, we systematically searched for literature on AI chatbots’ impact on HEIs. Our criteria prioritized peer-reviewed articles, prominent media outlets, and English publications, excluding tangential AI chatbot mentions. After selection, data extraction focused on authors, study design, and primary findings. The analysis combined descriptive and thematic approaches, emphasizing patterns and applications of AI chatbots in HEIs. Results The literature review revealed diverse perspectives on ChatGPT’s potential in education. Notable benefits include research support, automated grading, and enhanced human-computer interaction. However, concerns such as online testing security, plagiarism, and broader societal and economic impacts like job displacement, the digital literacy gap, and AI-induced anxiety were identified. The study also underscored the transformative architecture of ChatGPT and its versatile applications in the educational sector. Furthermore, potential advantages like streamlined enrollment, improved student services, teaching enhancements, research aid, and increased student retention were highlighted. Conversely, risks such as privacy breaches, misuse, bias, misinformation, decreased human interaction, and accessibility issues were identified. Discussion While AI’s global expansion is undeniable, there is a pressing need for balanced regulation in its application within HEIs. Faculty members are encouraged to utilize AI tools like ChatGPT proactively and ethically to mitigate risks, especially academic fraud. Despite the study’s limitations, including an incomplete representation of AI’s overall effect on education and the absence of concrete integration guidelines, it is evident that AI technologies like ChatGPT present both significant benefits and risks. The study advocates for a thoughtful and responsible integration of such technologies within HEIs.
... In the field of inclusive healthcare, a considerable risk is that technologically advanced healthcare solutions are being developed mostly in high-income countries; this risk can be mitigated if a responsible and sustainable approach is followed for advancing AI-enabled healthcare systems in middle- and low-income countries as well (Alami et al., 2020). Other scientific articles report on inclusive growth (Dubé et al., 2018; Fleissner, 2018), inclusive innovation and sustainability (Visvizi et al., 2018), and inclusive organizational environments (Miller et al., 2018), as these are shaped by the use of AI technology. ...
Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered: that is, technologies that are ethical and fair and that enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry, and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans’ cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action for research and development in AI that acts as a force multiplier towards more fair, equitable, and sustainable societies.
... Discriminatory advertisements reduce job seekers’ opportunities, undermine workplace diversity, and to some extent violate the principle of equality (Abou Hamdan 2019). Companies should provide transparency about the algorithm development process and train program developers to prevent unconscious bias (Miller et al. 2018). ...
In the global war for talent, traditional recruiting methods are failing to cope with the competition, so employers need the right recruiting tools to fill open positions. First, we explore how talent acquisition has transitioned from digital 1.0 to 3.0 (AI-enabled) as digital tools redesign business. Artificial intelligence technology has facilitated the daily work of recruiters and improved recruitment efficiency. Further, the study shows that AI plays an important role in each stage of recruitment, such as recruitment promotion, job search, application, screening, assessment, and coordination. Next, drawing on interviews with AI recruitment stakeholders (recruiters, managers, and applicants), the study discusses their acceptance criteria for each recruitment stage; stakeholders also raised concerns about AI recruitment. Finally, we suggest that managers need to attend to the cost of AI recruitment, legal privacy, recruitment bias, and the possibility of replacing recruiters. Overall, the study answers the following questions: (1) How is artificial intelligence used in the various stages of the recruitment process? (2) How do stakeholders (applicants, recruiters, managers) perceive the application of AI in recruitment? (3) What should managers consider when adopting AI in recruitment? In general, the discussion contributes to the study of the use of AI in recruitment and provides recommendations for implementing AI recruitment in practice.
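To make the recruitment-bias concern concrete, the following is a minimal sketch, not drawn from the study above, of how a recruiting team might audit an AI screening stage for disparate selection rates using the widely cited four-fifths (80%) adverse-impact guideline. The group labels and applicant counts are hypothetical.

```python
# Minimal adverse-impact check for an AI screening stage (illustrative only;
# the groups and counts below are hypothetical, not data from any cited study).

def selection_rate(passed, applied):
    """Share of applicants in a group who passed the screening stage."""
    return passed / applied if applied else 0.0

def adverse_impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical screening outcomes per applicant group.
outcomes = {
    "group_a": {"applied": 400, "passed": 120},
    "group_b": {"applied": 250, "passed": 45},
}

rates = {g: selection_rate(o["passed"], o["applied"]) for g, o in outcomes.items()}
ratios = adverse_impact_ratios(rates)

for group, ratio in ratios.items():
    flag = "review for possible bias" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 does not by itself prove discrimination, but it is a common trigger for a closer review of the screening model, its features, and its training data.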
Numerous sectors, particularly education, are being rapidly transformed by artificial intelligence (AI). AI is being utilised in school management to enhance student results, the learning experience, and administrative tasks. This study investigates the use of AI in educational management, including its advantages and disadvantages. It examines the literature on AI in school management using a systematic review method. According to the research, AI offers several benefits, such as increased student engagement, personalised instruction, and cost effectiveness. But AI also presents several difficulties, including ethical issues, possible biases, and the need to reskill the workforce. The study finds that AI may significantly enhance educational management, but its application must be approached carefully and cautiously.
Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicate a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
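The method summarized in this abstract rests on measuring associations between word vectors. The following is a minimal sketch of such a word-embedding association test, assuming a hypothetical `embedding` dictionary mapping words to vectors; published studies compute the same quantities over pretrained models such as GloVe or word2vec rather than this toy lookup.

```python
# Sketch of a word-embedding association test (assumes `embedding` is a dict
# of word -> numpy vector built elsewhere, e.g. from pretrained embeddings).
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, attr_a, attr_b, embedding):
    """Mean similarity of `word` to attribute set A minus its mean similarity to set B."""
    v = embedding[word]
    sim_a = np.mean([cosine(v, embedding[a]) for a in attr_a])
    sim_b = np.mean([cosine(v, embedding[b]) for b in attr_b])
    return sim_a - sim_b

def association_effect(targets_x, targets_y, attr_a, attr_b, embedding):
    """Standardized difference in association between two target word sets."""
    x_assoc = [association(w, attr_a, attr_b, embedding) for w in targets_x]
    y_assoc = [association(w, attr_a, attr_b, embedding) for w in targets_y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Hypothetical usage: career vs. family words against male vs. female terms.
# effect = association_effect(["executive", "salary"], ["home", "parents"],
#                             ["he", "man"], ["she", "woman"], embedding)
```

A positive effect size would indicate that the first target set (here, career words) sits closer to the first attribute set (male terms) than the second target set does, mirroring the Implicit Association Test logic described above.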
In this landmark book, Scott Page redefines the way we understand ourselves in relation to one another. The Difference is about how we think in groups--and how our collective wisdom exceeds the sum of its parts. Why can teams of people find better solutions than brilliant individuals working alone? And why are the best group decisions and predictions those that draw upon the very qualities that make each of us unique? The answers lie in diversity--not what we look like outside, but what we look like within, our distinct tools and abilities. The Difference reveals that progress and innovation may depend less on lone thinkers with enormous IQs than on diverse people working together and capitalizing on their individuality. Page shows how groups that display a range of perspectives outperform groups of like-minded experts. Diversity yields superior outcomes, and Page proves it using his own cutting-edge research. Moving beyond the politics that cloud standard debates about diversity, he explains why difference beats out homogeneity, whether you're talking about citizens in a democracy or scientists in the laboratory. He examines practical ways to apply diversity's logic to a host of problems, and along the way offers fascinating and surprising examples, from the redesign of the Chicago "El" to the truth about where we store our ketchup. Page changes the way we understand diversity--how to harness its untapped potential, how to understand and avoid its traps, and how we can leverage our differences for the benefit of all.