The Ghost In The Machine, Or The Ghost In Organizational Theory? A Complementary
View On The Use Of Machine Learning
Forthcoming in Academy of Management Review
Dirk Lindebaum
Grenoble Ecole de Management, France
mail@dirklindebaum.EU
- dirklindebaum.EU -
Mehreen Ashraf
Cardiff Business School, UK
AshrafM2@cardiff.ac.uk
A dialogue piece written for AMR in response to this study:
Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M. (2020). Ghost in the Machine: On
Organizational Theory in the Age of Machine Learning. Academy of Management Review.
doi:10.5465/amr.2019.0247
Within the span of just one year, interest in the topic of machine learning (ML) and algorithms
has accelerated in AMR (see, e.g., Balasubramanian, Ye, & Xu, 2020; Lindebaum, Vesa, & den
Hond, 2020). The article by Leavitt et al. (2020) on OT in the age of ML speaks to this debate.
They define ML as “a broad subset of artificial intelligence, wherein a computer program applies
algorithms and statistical models to construct complex patterns of inference within data”. The
intention is to expand their article “towards the understanding that ML can function as a powerful
catalyst for the next chapter in the evolution of knowledge generation within organizational
scholarship when [it] is properly matched with theory” (italics in original). To address this aim,
descriptions of current approaches to ML (i.e., supervised, reinforcement & unsupervised) are
provided, which are then linked to the deductive, abductive, and inductive processes we use in OT.
Therefore, ML constitutes a “novel tool in our epistemological kit”. Especially when ML is applied
in inductive research, so it is claimed, the “tolerance for surprise results” is magnified when
theorists bridge “considerations of ML to research and theory”, and thereby secure possibilities
for “how ML and theory may best play synergistic roles”. In sum, the take-home message is that
“organizational scholars must significantly adapt their theory building pursuits to the age of ML”.
There is much to admire in their article, such as the detailed description of how the
various approaches to ML can potentially be mapped onto the aforementioned epistemological
processes in OT. Thus, we welcome their article as a catalyst for intellectual stimulation. Despite
this, there is a need to critically interrogate some of the article’s basic assumptions. Two reasons
require said interrogation. First, the article is conspicuous for its ontological neglect. By advocating
the use of ML to advance theory, this neglect implies that the role of science is reduced to a
positivist Weltanschauung only. This relates to the second issue; the article reduces the task of
science to essentially prediction only, thereby not only marginalizing the branch of social science
concerned with understanding, but also failing in its aim to explain social phenomena. To advance
the debate, we believe it is crucial to draw clearer boundaries around the promises of using ML in
OT for theory generating purposes. Overcoming these boundaries to properly ‘explain’ and
‘understand’ social phenomena will likely require technological progress to an extent we believe
infeasible any time soon. We briefly elaborate on the latter two points at the end of this essay.
ONTOLOGICAL NEGLECT
Leavitt et al. (2020) emphasize ML as a new tool in our epistemological toolbox. However, since
ontological questions are prior to epistemological ones (as the former constrains answers to the
latter, see Guba & Lincoln, 1994), this emphasis appears premature. Explicitly recognizing the
nature of social reality enables researchers to continuously scrutinise and correct it in order
to make the world a better place (Lawson, 2019). Thus, “understanding [a phenomenon’s] . . .
essential properties allows us to relate to, or interact with, it in more knowledgeable and competent
ways” (Lawson, 2019: 3). Practically, an ontological appreciation means to have a tool at hand to
render social interventions – for which the introduction of technology qualifies (Moser, den Hond,
& Lindebaum, 2021) – more likely to succeed (Lawson, 2019). In short, ontological appreciation
is closely linked to being able to explain relationships between constructs in more knowledgeable
ways.
The ontological problem residing in the Leavitt et al. article is the imposition of an ontological
straitjacket à la positivism on OT as a whole, including the inductive tradition. Specifically, the
way that ML is advocated resembles the description of three key tenets of positivism, two of which
are of particular relevance in this section. The first is methodological monism, defined as the “unity
of scientific methods amidst the diversity of subject matter of scientific investigation” (Von
Wright, 1971: 4). No matter if theorists have deductive, abductive, or inductive dispositions, and
the variety of methodological approaches that come with these traditionally, all can be subsumed
under the paradigmatic strictures of positivism as programmed into ML code. We discern three
issues here. First, for inductive research, this stands in contrast to the message that “unpacking
new theory requires scholars to take advantage of the breadth and variety of approaches to
qualitative research” (Bansal, Smith, & Vaara, 2018: 1189). Second, some scholars worry that the
application of positivist quality standards, like transparency and replicability, to qualitative data is
“unhelpful and potentially even dangerous” because of the danger to inappropriately import “the
logics developed largely in experimental social psychology to the field-based, qualitative, and
theory-generating side of our [qualitative] field” (Pratt, Kaplan, & Whittington, 2020: 2). Third, it
runs counter to recent calls to develop “interparadigmatic appreciation in action; that is, feeling at
ease in moving between paradigms and the genres of writing they represent”, simply because the
question or topic at hand requires that (Lindebaum & Wright, 2021: italics in original). Thus, the
methodological monism that shines through in Leavitt et al.’s advocacy of ML entails the loss of
diversity of research approaches, a conflation of evaluative logics between quantitative and
qualitative research, and a lost opportunity for dialogue amongst paradigms.
The use of ML also corresponds to another tenet of positivism, namely, that mathematics sets
the “methodological ideal or standard which measures the degree of development and perfection
of all the other sciences, including the humanities” (Von Wright, 1971: 4). With this in mind, it
is not surprising that some argue that increased quantification represents a hallmark of scientific
maturity (a point critiqued by Guba & Lincoln, 1994). As Lindebaum et al. (2020) argue, ML
operates on the assumption of formal rationality (or Zweckrationalität), which legitimizes means-
end calculations and dependence on abstract and universally valid rules. When “brute calculation
reigns with regard to abstract rules”, decisions are arrived at “without regard to persons”
(Lindebaum et al., 2020: 253). Not only that; an unbridled pursuit of the possibilities of ML can
also entail that, eventually, substantive rationality (or Wertrationalität) is transformed into formal
rationality through formalization. It is at this juncture, for example, that human judgement based
on deliberative imagination and emotional attunement to the situation at hand is substituted by
‘reckoning’ grounded in the calculative (formal) rationality of present-day computers (Moser et
al., 2021). This is exactly what the article insinuates when, in the context of ML based on
unsupervised learning applied to qualitative data (e.g., news stories), “the algorithm independently
explores unlabelled data, extracting and constructing hidden patterns and structure”. To detect
these hidden structures, we do see potential merit in the use of unsupervised learning. For instance,
in order to detect patterns of racial bias inherent in the reporting of crimes across a nation’s regions
– which readily yields millions of data points¹ – it could be useful to enlist ML to probe deeper into
those hidden structures concerning potential racial biases. However, while we can see that the
greater computational powers that ML affords can be usefully applied to such an example, we
need to underline that such applications largely remain atheoretical in kind. Next, we relate the
issue of ontological neglect to reduced scope for explanation and understanding.
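The kind of unsupervised pattern detection invoked above can be sketched in miniature. The snippet below clusters four invented ‘news’ fragments by word counts using a hand-rolled two-means procedure; the texts, vocabulary, and the choice of two clusters are illustrative assumptions, not anything drawn from Leavitt et al. The point is precisely the one made in the text: the algorithm returns groupings, not reasons.

```python
from collections import Counter
import math

# Toy 'news' snippets: texts and their topical split are invented for illustration.
docs = [
    "police report crime suspect arrest",
    "crime suspect police charge arrest",
    "market stocks profit shares trade",
    "stocks trade market profit growth",
]

# Bag-of-words vectors over the shared vocabulary.
vocab = sorted({w for d in docs for w in d.split()})

def vectorize(doc):
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

vectors = [vectorize(d) for d in docs]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def kmeans2(vs, iters=10):
    # Deterministic 2-means: seed centroids with the two most distant
    # documents, then alternate assignment and centroid updates.
    i, j = max(
        ((a, b) for a in range(len(vs)) for b in range(a + 1, len(vs))),
        key=lambda pair: dist(vs[pair[0]], vs[pair[1]]),
    )
    centroids = [vs[i], vs[j]]
    labels = [0] * len(vs)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda k: dist(v, centroids[k])) for v in vs]
        for k in (0, 1):
            members = [v for v, l in zip(vs, labels) if l == k]
            if members:
                centroids[k] = mean(members)
    return labels

labels = kmeans2(vectors)
print(labels)  # a partition of the documents, but no account of 'why'
```

Real applications would of course use large corpora and established libraries, but the epistemic situation is the same: the output is a partition of the data, and any ‘why’ must be supplied by the theorist.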
REDUCED SCOPE FOR EXPLANATION & UNDERSTANDING
A third tenet of positivism concerns prediction and explanation (Von Wright, 1971). Here, we
come full circle with Leavitt et al.’s article. They refer to the aim of ‘science’ as being twofold: (i)
predicting variance and (ii) providing explanation for the predicted variance. While this is
consistent with positivism, they offer an incomplete image of the role of social science, and it
appears internally inconsistent too when one considers their article in toto.
It is incomplete because the branch of social science concerned with understanding (or
verstehen) of phenomena in their historical context, or the “re-creation in the mind of the scholar
of the mental atmosphere, the thoughts and feelings and motivations, of the objects of his [or her]
study” (Von Wright, 1971: 6), is not recognized in the article. But it matters, because it underlines
that both participants and researchers are entwined as co-producers of knowledge. The researcher
attempts to establish that recreation through empathic probing designed to better understand the
intentions of participants in their relative and local context (as per constructivism), or in their
social, political, or economic context as crystallized over time (as per critical theory, see Guba &
Lincoln, 1994). This is consistent with the etic/emic dilemma in social science.² Also, while ML
may screen large digital qualitative data sets, it can only ever analyse data that are ‘out there at a
given moment in time’. It cannot react to subtle changes in facial expressions, fluctuations in
intonations, speech pauses, or nervous finger tapping on the desk that would prompt the researcher
to ask a different question, or to ask the question differently, in response to these cues. Doing so
¹ See https://www.ndr.de/fernsehen/sendungen/panorama3/Polizei-nennt-Nationalitaeten-regional-sehr-unterschiedlich,polizeimeldungen102.html, accessed 23 March 2021.
² The etic, or outsider, perspective applied to an inquiry by a researcher “may have little or no meaning within the emic (insider) view of the studied” subject(s) at hand (Guba & Lincoln, 1994: 106). Qualitative data thus serve to reveal emic views.
would also entail a different response from participants. Thus, what is the meaning of ‘data points’
(i.e., specific observations), if we do not understand their contextual origin?
But the message is also internally inconsistent in their article. On the one hand, Leavitt et al.
recognize the limits of ML in relation to being able to offer explanation. In their words, “algorithms
generated by ML are optimized for detecting patterns, but generally fail to explain ‘why’ such
patterns occur”. However, we argue that this caveat gets crowded out in the remainder of the
article, which devotes much space to touting the benefits of ML. Thus, to better inform future
decisions on the application of ML to deductive, abductive and inductive research, it is crucial to
elaborate that, in the case of more advanced ML algorithms, we no longer understand precisely
how ML algorithms go about fulfilling their performance criteria, because they are essentially
‘black boxes’ in their processing of data (Lindebaum et al., 2020). What concerns us is that the
calculus informing ML generated outputs is often even incomprehensible to its creators. Where
does that leave our ability to explain social phenomena? How much at ease can or should we feel
in ‘discovering’ intriguing new ‘facts’ that we cannot fully, or at all, explain? What is the use of
developing a “tolerance for surprise results” when we cannot explain, much less understand, social
phenomena? This exactly pertains to Suddaby’s (2014) caution about ‘dustbowl empiricism’,
because in the absence of a conceptual framework, dustbowl empiricism is likely to fail. That
failure is due to conceptual frameworks being relegated to the backstage, rendering theory more
implicit. When theories are implicit, “they discourage researchers from asking fundamental
questions about the assumptions that underpin knowledge and the methods used to acquire
knowledge” (Suddaby, 2014: 408). Therefore, when Leavitt et al. argue that knowledge generation
in OT can powerfully proceed with the aid of ML if the latter is matched with theory, we discern
a fundamental tension between their advocacy and Suddaby’s (2014) concerns around the
atheoretical nature of dustbowl empiricism.
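The gap between prediction and explanation can be made concrete with a deliberately minimal ‘black box’. The sketch below memorizes invented observations and predicts by nearest neighbour; the variables, numbers, and labels are hypothetical illustrations only. The predictor can be accurate, yet interrogating it yields nothing beyond the stored data points, a toy analogue of the opacity concern raised above.

```python
import math

# Invented observations: (hours of email per day, meetings per day) -> label.
# The variables, values, and labels are hypothetical illustrations.
train = [
    ((9.0, 6.0), "burnout"),
    ((8.5, 7.0), "burnout"),
    ((2.0, 1.0), "no_burnout"),
    ((3.0, 2.0), "no_burnout"),
]

def predict(x):
    """1-nearest-neighbour: pure pattern matching, with no model of 'why'."""
    nearest = min(train, key=lambda item: math.dist(x, item[0]))
    return nearest[1]

# The prediction can track the data well, but the only 'explanation'
# on offer is the memorized data itself.
print(predict((8.8, 6.5)))
```

Deep models replace this lookup with millions of learned weights, which makes the pattern detection far more powerful and the ‘why’ even less recoverable, as the essay notes with reference to Lindebaum et al. (2020).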
In sum, Leavitt et al. have provided a timely contribution on the role of ML in OT. Yet, the
ontological neglect and reduced scope for explanation (especially in terms of deductive traditions)
and understanding (especially in terms of inductive traditions) that follows the use of ML puts
some strain on the authors’ synergistic proposal that ML can advance OT if matched with theory.
Accordingly, we are not convinced by their claim that “ML can out-predict what theory-driven
science is currently capable of” (for advocacy of theory-driven science, see Bartunek, 2020). Thus,
a boundary must be drawn: any such effort concerns prediction only, with prior knowledge
harnessed to discourage atheoretical efforts and avoid post-hoc theorizing. To foster
explanation and understanding through use of ML would require ML being capable of perfectly
imitating the human mind in all its diversity and scope for spontaneous, creative and fresh insights.
Thus, whether or not this obstacle can be overcome is a matter for computer scientists to resolve,
and for now outside the theoretical conundrums we often grapple with. While we are open to
technological innovations, we are not there yet, and perhaps may never be.
REFERENCES
Balasubramanian, N., Ye, Y., & Xu, M. 2020. Substituting Human Decision-Making with
Machine Learning: Implications for Organizational Learning. Academy of Management
Review.
Bansal, P., Smith, W. K., & Vaara, E. 2018. New Ways of Seeing through Qualitative Research.
Academy of Management Journal, 61(4): 1189-1195.
Bartunek, J. M. 2020. Theory (What Is It Good For?). Academy of Management Learning &
Education, 19(2): 223-226.
Guba, E. G., & Lincoln, Y. S. 1994. Competing paradigms in qualitative research. In N. K.
Denzin, & Y. S. Lincoln (Eds.), Handbook of qualitative research: 105-117. Thousand
Oaks: Sage.
Lawson, T. 2019. The Nature of Social Reality: Issues in Social Ontology. New York:
Routledge.
Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M. 2020. Ghost in the Machine: On
Organizational Theory in the Age of Machine Learning. Academy of Management
Review.
Lindebaum, D., Vesa, M., & den Hond, F. 2020. Insights From “The Machine Stops” to Better
Understand Rational Assumptions in Algorithmic Decision Making and Its Implications
for Organizations. Academy of Management Review, 45(1): 247-263.
Lindebaum, D., & Wright, A. L. 2021. From the editors: Imagining scientific articles and essays
as productive co-existence. Academy of Management Learning & Education.
Moser, C., den Hond, F., & Lindebaum, D. 2021. Exemplary Contribution: Morality in the age
of artificially intelligent algorithms. Academy of Management Learning & Education.
Pratt, M. G., Kaplan, S., & Whittington, R. 2020. Editorial Essay: The Tumult over
Transparency: Decoupling Transparency from Replication in Establishing Trustworthy
Qualitative Research. Administrative Science Quarterly, 65(1): 1-19.
Suddaby, R. 2014. Editor's Comments: Why Theory? Academy of Management Review, 39(4):
407-411.
Von Wright, G. H. 1971. Explanation and Understanding. New York: Cornell University Press.
... AI differs from human intelligence and decision-making in critical ways (Balasubramanian et al. 2020). AI algorithms often utilize machine learning techniques, which are grounded on an ontology that prioritizes (and is constrained to) prediction rather than explanation or granular understanding (Lindebaum & Ashraf, 2021). The predictive and pattern detecting processes that comprise AI algorithms are "made possible by the availability of highly advanced correlational, clustering, and regression analyses and other techniques of pattern recognition" (Lindebaum et al., 2020: 256). ...
... For the simplicity of presentation, this paper does not adopt this practice; however, the importance of this distinction is noted. programmed according to the positivist paradigm (Lindebaum & Ashraf, 2021), AI is based on formal rationality. Formal rationality involves "following abstract and formal procedures, rules, and laws, which are taken as unproblematic and legitimate fixed ends" (Lindebaum et al., 2020: 248). ...
... The formal rationality of AI is contrasted with the substantive rationality that characterizes human intelligence, decision-making, and learning. Substantive rationality is based on judgments, which are based on value-laden reflection involving imagination, morality, empathy, and emotional attunement to the specifics of situations and contexts (Lindebaum & Ashraf, 2021;Lindebaum et al., 2020: 248;Moser et al., 2020). Because judgment incorporates imagination and values "substantive rationality contains the possibility to normatively see "the world as it might be" (Suddaby, 2014: 408), involving "what is," "what can," and "what ought to be" in empirical, moral, and aesthetic terms." ...
Article
Full-text available
Purpose Entrepreneurs are increasingly relying on artificial intelligence (AI) to assist in creating and scaling new ventures. Research on entrepreneurs’ use of AI algorithms (machine learning, natural language processing, artificial neural networks) has focused on the intra-organizational implications of AI. The purpose of this paper is to explore how entrepreneurs’ adoption of AI influences their inter- and meta-organizational relationships. Design/methodology/approach To address the limited understanding of the consequences of AI for communities of entrepreneurs, this paper develops a theory to explain how AI algorithms influence the micro (entrepreneur) and macro (system) dynamics of entrepreneurial ecosystems. Findings The theory’s main insight is that substituting AI for entrepreneurial ecosystem interactions influences not only entrepreneurs’ pursuit of opportunities but also the coordination of their local entrepreneurial ecosystems. Originality/value The theory contributes by drawing attention to the inter-organizational implications of AI, explaining how the decision to substitute AI for human interactions is a micro-foundation of ecosystems, and motivating a research agenda at the intersection of AI and entrepreneurial ecosystems.
... In the field of organizational studies, debates about the interactions between humans and non-humans, specifically concerning the use of artificial intelligence, are not new (Leavitt et al., 2020;Zuboff, 1988). With the accelerated advancement of computational capacity and data processing, onto-epistemological opportunities and challenges have arisen regarding the production of science with technological artifacts (Lindebaum & Ashraf, 2021), as well as its impacts on the educational process. Debates such as those proposed by Kerlinger (1973) have already been put into the discussion from a managerial perspective, regarding how artificial intelligence (AI) could be a skillful research tool in one of the required functions of management science (Kerlinger, 1973): predicting variation. ...
... Despite the apparent potential for academic writing, it is important to remember that AI operates based on formal calculative rationality that legitimizes results through probabilistic calculations, submitted to abstract rules (not free of biases) and "universally" valid assumptions (Lindebaum & Ashraf, 2021). This fact builds knowledge production from a kind of automated "ontological blindness" (Cunliffe, 2022) that disregards the influence of the researcher's beliefs about the nature of social and organizational realities in the theorizing process. ...
Article
Full-text available
The advancement of the use of Artificial intelligence in the scientific field, such as Connectedpapers and ChatGPT, has allowed us to reflect on how technological tools have become mediators and participants in the context of education and academia. In the field of organizational theories, despite the different perspectives on understanding the incorporation of AIs in academic practice, we highlight two challenges in our daily academic life. The first challenge refers to confronting the digital colonialism that AIs impose on us, considering that they constitute themselves through the reproduction of language models programmed in countries of the "global north” The second challenge concerns its unfoldings in the process of automation of academic writing in administration. We consider the need to reflect on how the uses of AIs can contemporarily reproduce our place in the field of science as one of scientific data extractivism, the limitation of the teaching of academic writing in administration as the reproduction of an "assisted programming" of hegemonic language models, and the possibilities of disentangling as a way of counteracting this dynamic of automation of article writing in administration.
... As ML techniques are capable of enriching our understanding of intricate variable relationships, DDM and conventional methods should thrive simultaneously for methodological rigor in OB. Methodological monism in ML results in the loss of dialogue among paradigms (Lindebaum & Ashraf, 2021). Hence, in keeping with the perspectives of positivism and interpretivism and in adopting methodological pluralism for OB research programs, this essay argues that while conventional quantitative and qualitative methods continue to be utilized to explain and understand behaviors (Buchanan, 1998;Bryman & Bell, 2003), applying ML-based algorithms is recommended when the goals are exploration of patterns, investigation of complex relationships among variables, and precision (in research models and in answers to research questions). ...
... Using AI in research methodologies adheres to abductive inquiry, wherein unexpected observations made by ML add value for inductive theorizing and subsequent hypothesis generation (Doornenbal et al., 2021). In acknowledging concerns relating to the lack of transparency in decision-making in ML (Lindebaum & Ashraf, 2021), the conditional applications of grey box techniques would enrich and complement existing core methodologies rather than competing with them (Leavitt, Schabram, Hariharan, & Barnes, 2020). ...
Article
Full-text available
Purpose Emerging technologies are capable of enhancing organizational- and individual-level outcomes. The organizational behavior (OB) field is beginning to pursue opportunities for researching emerging technologies. This study aims to describe a framework consisting of white, black and grey boxes to demonstrate the tight coupling of phenomena and paradigms in the field and discusses deconstructing OB’s white box to encourage data-driven phenomena to coexist in the spatial framework. Design/methodology/approach A scoping literature review was conducted to offer a preliminary assessment of technology-oriented research currently occurring in OB. Findings The literature search revealed two findings. First, the number of published papers on emerging technologies in top management journals has been increasing at a steady pace. Second, various theoretical perspectives at the micro- and macro- organizational level have been used so far for conducting technology-oriented research. Originality/value By conducting a scoping review of emerging technologies research in OB literature, this paper reveals a conceptual black box relating to technology-oriented research. The essay advocates for loosening OB’s tightly coupled white box to incorporate emerging technologies both as a phenomenon and as data analytical techniques.
... Moving forward, I hope to see this appreciation continue, with our AI assistants that are used at various stages of the literature review process being reported (e.g., in a PRISMA flow diagram). In addition to aiding literature reviews as reported earlier, machine learning and other AI techniques are being used to complement theoretical development (see Leavitt et al., 2021;Lindebaum & Ashraf, 2023). This will need acknowledgement too. ...
... Decision-making is an important aspect of using AI when it comes to ethics, and all (at least the vast majority of) our decisions have moral components. In this paper we do not engage with particular application areas of AI, such as medical diagnosis (Davenport & Glaser, 2022;Davenport & Glover, 2018;Göndöcs & Dörfler, 2023), we locate our interest loosely in organizations (Csaszar & Steinberger, 2022;Davenport & Euchner, 2023;Davenport & Miller, 2022;Glikson & Woolley, 2020;Grodal et al., 2023;von Krogh, 2018;Leavitt et al., 2021;Lindebaum & Ashraf, 2021), in which concept we include business organizations, government institutions, as well as organizations, such as hospitals and universities, regardless of whether they are for profit or not. We are conscious of the organizational learning aspects and implications of AI (Balasubramanian et al., 2020;Davenport & Ammanath, 2020;Davenport & Mittal, 2022Göndöcs & Dörfler, 2022;Oliver et al., 2017;Pachidi et al., 2021;Raisch & Krakowski, 2021;Tschang & Almirall, 2021); although we cannot tackle these at a great depth here. ...
Conference Paper
Full-text available
We paraphrase Descartes' famous dictum in the area of AI ethics where the "I doubt and therefore I am" is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. The foundation of our argument is the discipline of ethics, one of the oldest and largest knowledge projects of human history, yet, we seem only to be beginning to get a grasp of it. After a couple of thousand years of studying the ethics of humans, we (humans) arrived at a point where moral psychology suggests that our moral decisions are intuitive, and all the models from ethics become relevant only when we explain ourselves. This recognition has a major impact on what and how we can do regarding AI ethics. We do not offer a solution, we explore some ideas and leave the problem open, but we hope somewhat better understood than before our study.
... Hence, different degrees of value mechanization can influence different consequences in practice, both for better or worse, as we suggest later. Our essay is, therefore, also designed as a call for action to sensitize 6 Of course, we recognize that technique or technology can be put to prosocial use, when, for instance, algorithms are used for early onset diagnosis of motor degenerative diseases (Nalls et al., 2015), or the computer-aided discovery of latent topics in a set of textual data (Hannigan et al., 2019; but also see Lindebaum & Ashraf, 2021), such as ethnic biases in public police statements about crime incidents (Eckert et al., 2021). However, unchecked, the relentless pursuit of technique and application of technology can cause massive individual and societal harm (cf. ...
Article
Full-text available
We review Jacques Ellul’s book The Technological Society to highlight ‘technique’ – the book’s central phenomenon – and its theoretical relevance for organizational and institutional theorists. Technique is defined as “the totality of methods rationally arrived at and having absolute efficiency . . . in every field of human activity” in society (1964: xxv, italics added). More than simply ‘machine technology’, technique involves the rational pursuit of standardized means or practices for attaining predetermined results. What makes Ellul both unique and relevant for organizational and institutional theorists is his historical analysis delineating the characteristics of, and the processes through which, technique has evolved into an autonomic and agentic force. We build on and mobilize Ellul’s analysis to explore two aims in this essay. First, we aim to illuminate the process through which technique transforms values – a process we describe as the mechanization of values in organizations and institutions. Second, we identify the consequences of value mechanization for organizational scholarship. We discuss the wider ramifications of Ellul’s work for management theory, practise, and education.
... As with most NLP techniques using deep learning models, it is difficult to determine how BERT actually captures writing quality, making it a challenge to understand how these scores influence investment outcomes or theorize what actually shapes these patterns. As management scholars are currently debating whether and how to incorporate advanced computational methods into our field (Leavitt et al., 2020;Lindebaum and Ashraf, 2021), our post-hoc task may represent one approach for future studies to shed more light on this process by isolating the underlying mechanisms mediating the relationship between average BERT score and funding outcomes. ...
Article
Full-text available
We explore how natural language processing can be applied to predict crowdfunding outcomes. Using the Bidirectional Encoder Representations from Transformers (BERT) technique, we find that crowdfunding projects that use a story section description with a higher average BERT score (indicating a lower quality of writing) tend to raise more funding than those with lower average BERT scores. In contrast, risk descriptions that have higher BERT scores tend to receive less funding and attract fewer backers. These relationships remain consistent after controlling for various traditional readability indices, highlighting the potential benefits of incorporating natural language processing techniques in entrepreneurship research.
Article
Full-text available
Resumo O avanço da utilização das Inteligências Artificiais (IAs) no campo científico, a exemplo de Connected Papers e ChatGPT, tem nos possibilitado refletir sobre como ferramentas tecnológicas se tornaram mediadores e participantes no contexto da educação e da academia. No campo das teorias organizacionais, a despeito das diferentes perspectivas de compreensão da incorporação das IAs na prática acadêmica, destacamos dois desafios em nosso cotidiano acadêmico. O primeiro desafio refere-se ao enfrentamento do colonialismo digital que as IAs nos impõem, considerando que elas se constituem por meio da reprodução de modelos de linguagem programados em países do “Norte global”. O segundo desafio diz respeito aos seus desdobramentos no processo de automatização da escrita acadêmica em administração. Consideramos a necessidade de se refletir como os usos das IAs podem reproduzir contemporaneamente nosso lugar no campo da ciência como o de extrativismo de dados científicos, a limitação do ensino da escrita acadêmica em administração como sendo a reprodução de uma “programação assistida” de modelos de linguagens hegemônicos e as possibilidades de desenquadrar como forma de contrapor essa dinâmica de automatização da escrita de artigos em administração.
Article
Full-text available
The human-centered AI approach posits a future in which the work done by humans and machines will become ever more interactive and integrated. This article takes human-centered AI one step further. It argues that the integration of human and machine intelligence is achievable only if human organizations—not just individual human workers—are kept “in the loop.” We support this argument with evidence from two case studies in the area of predictive maintenance, through which we show how organizational practices are needed and shape the use of AI/ML. Specifically, organizational processes and outputs, such as decision-making workflows, directly influence how AI/ML affects the workplace, and they are crucial for answering our first and second research questions, which address the pre-conditions for keeping humans in the loop and for supporting the continuous and reliable functioning of AI-based socio-technical processes. From the empirical cases, we extrapolate a concept of “keeping the organization in the loop” that integrates four different kinds of loops: AI use, AI customization, AI-supported original tasks, and taking contextual changes into account. The analysis culminates in a systematic framework of keeping the organization in the loop, based on interacting organizational practices.
Article
This essay starts from the premise that human judgment is intrinsically linked with learning and adaptation in complex socio-technological environments. Under the illusory veneer of retaining control over algorithmic reckoning, we are concerned that algorithmic reckoning may substitute human judgment in decision-making and thereby change morality in fundamental, perhaps irreversible, ways. We offer an ontological critique of artificially intelligent algorithms to show what is going on ‘under their hood’, especially in cases when human morality is already co-constituted with algorithmic reckoning. We offer a twofold call for (in)action. We offer a call for inaction as far as the substitution of judgment for reckoning through our teaching in business schools and beyond is concerned. We offer a re-invigorated call for action, in particular to teach more pragmatist judgment in our curricula across subjects to foster social life (rather than stifle it through algorithmic reckoning).
Article
With rapid advancements in machine learning, we consider the epistemological opportunities presented by this novel tool for promoting organizational theory. Our paper unfolds in three sections. We begin with an overview of the three forms of machine learning (supervised, reinforcement, and unsupervised), translating these onto our common modes of research (deductive, abductive, inductive, respectively). Next, we present frank critiques of machine learning applications for science, as well as of the state of organizational scholarship writ large, highlighting contemporary challenges in both domains. We do so to make the case that machine learning and theory are not in competition but have the potential to play complementary roles in moving our field beyond siloed domains and incremental theory. Our final section speaks to this synergy. We propose that machine learning can act as a tool to test and prune midrange theory, and as a catalyst to expand the explanatory spectrum that theory can inhabit. Specifically, we outline how machine learning can support local but perishable theory targeting pragmatic problems in the here and now, and grand theory that is sufficiently bold and generalizable across contexts and time to serve the social-functional purposes of inspiring and facilitating long-term epistemological progress across domains.
Article
The richness of organizational learning relies on the ability of humans to develop diverse patterns of action by actively engaging with their environments and applying substantive rationality. The substitution of human decision-making with machine learning has the potential to alter this richness of organizational learning. Though machine learning is significantly faster and seemingly unconstrained by human cognitive limitations and inflexibility, it is not true sentient learning and relies on formal statistical analysis for decision-making. We propose that the distinct differences between human learning and machine learning risk decreasing the within-organizational diversity in organizational routines and the extent of causal, contextual, and general knowledge associated with routines. We theorize that these changes may affect organizational learning by exacerbating the myopia of learning, and highlight some important contingencies that may mute or amplify the risk of such myopia.
Article
Management journals are currently responding to challenges raised by the “replication crisis” in experimental social psychology, leading to new standards for transparency. These approaches are spilling over to qualitative research in unhelpful and potentially even dangerous ways. Advocates for transparency in qualitative research mistakenly couple it with replication. Tying transparency tightly to replication is deeply troublesome for qualitative research, where replication misses the point of what the work seeks to accomplish. We suggest that transparency advocates conflate replication with trustworthiness. We challenge this conflation on both ontological and methodological grounds, and we offer alternatives for how to (and how not to) think about trustworthiness in qualitative research. Management journals need to tackle the core issues raised by this tumult over transparency by identifying solutions for enhanced trustworthiness that recognize the unique strengths and considerations of different methodological approaches in our field.