NOVATEUR PUBLICATIONS, International Journal of Innovations in Engineering Research and Technology [IJIERT], ISSN: 2394-3696, Volume 5, Issue 7, July 2018
THE FUTURE OF ARTIFICIAL INTELLIGENCE AT WORK: A REVIEW ON
EFFECTS OF DECISION AUTOMATION AND AUGMENTATION ON WORKERS
TARGETED BY ALGORITHMS AND THIRD-PARTY OBSERVERS
Dhaya Sindhu Battina
Data Engineer & Department of Information Technology
USA
ABSTRACT
The main purpose of this paper is to review the future of artificial intelligence at work, in particular how decision automation and augmentation will affect workers. Automation and augmentation of decision-making processes in the workplace are becoming more common because of advancements in artificial intelligence. In addition to transforming our homes, smart technologies are making inroads into a wide range of businesses and causing upheaval in the workplace [1]. Although AI can boost productivity, efficiency, and accuracy throughout a company, is this always a good thing for the organization? There is a widespread belief that the advent of AI will result in the replacement of human employees by computers and robots, and that this advancement in technology represents a danger rather than an opportunity to improve ourselves. Leaders must understand how AI will affect their workforces and then prepare them: upskilling certain individuals to perform their current occupations with AI, and retraining and hiring others for the new positions that AI will require. Schools and parents will have to instill in their children a love of STEM as well as a sense of wonder and curiosity for learning throughout their lives [1].
Keywords: Artificial intelligence, work, recruitment, selection, automation, algorithms.
I. INTRODUCTION
Automation has aided human labor for decades. Production and aviation, together with the duties that support them, have always been the main beneficiaries of automation. AI and machine learning are now widely used to automate jobs that were previously performed by humans. As a result, people are assisted in many aspects of daily life by automation. Among other things, fully automated data processing and advanced analytics systems assist judges in courts of law, doctors with diagnoses, and executives with high-risk management responsibilities [1]. Current practical advancements and contemporary research in the field of human resource management hint at a future in which managers will interact with automated systems to complete duties as diverse as planning, employee recruitment, and sustainability management.
In the past, management automation research has mostly dealt with efficiency and effectiveness issues. For example, automated procedures may aid in the evaluation of motivation letters, and studies have demonstrated the feasibility of using automated systems to review interviews. A further line of research focuses on whether and how individuals utilize automated systems for decision assistance and examines whether people dislike or value the advice given by these systems [2].
However, automation's ramifications do not end with adoption or efficiency/effectiveness concerns. The introduction of automation in traditional domains has been found to affect job duties, motivation, and overall well-being at work. We believe that the automation of managerial work can have the same impact [2]. Automation has already had a significant impact on recruiting managers' ability to do their jobs, particularly in personnel selection (as a distinct management responsibility). For example, they may have systems that collect application information and provide candidate ratings based on automated screening of resumes or job interviews. On the one hand, productivity might grow as a result, giving managers more time to focus on other projects. On the other hand, recruiting managers' processing of information during recruitment is affected by automated
technologies, which, for example, lessen the requirement to analyze raw application data, and managers may feel that their direct influence on the recruitment process is weakened. There may be consequences for how tasks are viewed in terms of autonomy and responsibility, which might impair motivation and job satisfaction [2].
The adoption of artificial intelligence (AI) and machine learning (ML) to assist administrative activities and functions has grown in popularity. In contrast to conventional applications (e.g., manufacturing or aviation), the adoption of automation in management is not well understood. This research analyzes the impact of diverse forms of automated decision support systems in staff recruitment, as a specialized managerial function, on decision task performance, the time needed to reach a choice, attitudes to work (e.g., enjoyment), and self-efficacy in personnel selection within a work design framework [3,4].
II. PROBLEM STATEMENT
The main problem that this paper addresses is the future of artificial intelligence at work, in particular how decision automation and augmentation will affect workers. For a long time, engineers raised the fear that automation and artificial intelligence (AI) would be a disaster for the labor market [5,6]. Since then, there has been a round of clarifications and reassurances. A growing consensus holds that automation will bring neither doomsday nor utopia, but will instead offer both advantages and stresses to workers. Because of this, discussions on the "future of work" are often imprecise and abstract. The fear of AI-powered automation is overstated for people, companies, and nations with the necessary capabilities; its benefit to the economy is enormous.
III. LITERATURE REVIEW
A. Automating and assisting in the selection of new employees
Personnel selection is one area where automated assistance systems for high-level cognitive activities are developing as a research and practice subject. Using such a system, hiring managers may be provided with rankings of prospects to help them in the recruiting process [7]. Other research has looked at the use of computer-assisted interviewing. Some researchers have posited that interviewing may be automated so that candidates can be screened and a shortlist of the most qualified candidates can be presented to prospective employers [8].
Fig i: Benefits of AI in selection and recruitment of employees
Personnel selection is becoming more complicated, which may explain the growing usage of such methods. Hiring managers collect and combine a wide range of information from a potentially huge number of candidates (for example, outcomes from intelligence tests and interviews). As well as screening and hiring the most qualified candidates, they must also take into account corporate objectives (such as cost and
diversity) while also complying with regulatory rules (e.g., regarding adverse impact) [8]. There is optimism that automated decision support systems may assist in acquiring and combining information for a large number of candidates, making selection processes more efficient.
To have a better grasp of automation systems for employee recruitment, it is important to look at studies on automation as well as work on decision support systems [9,10]. Such systems offer relevant information to the decision-maker after carrying out the information retrieval and decision-making components of the process. Automated information collection, filtering and analysis, decision suggestions, and action execution are the four major categories of automated functions. The level of automation rises as more of these tasks are integrated into a system. Acquiring information may, for example, include automated transcription of interview videos. Highlighting terms in these transcripts may help with the filtering and analysis of data. Classification (e.g., separating suitable from non-suitable candidates) or prediction tasks may be fulfilled by systems that give decision suggestions learned from past data. Once these activities have produced results, they may be shown to humans to help them make decisions [10].
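To make the four categories concrete, the following minimal Python sketch walks an applicant record through information collection, filtering and analysis, a decision suggestion, and a display-only action step. All names, the keyword list, and the scoring rule are illustrative assumptions, not part of any system reviewed here.

```python
# Hypothetical sketch of the four automated function categories applied to
# applicant screening: collection, filtering/analysis, decision suggestion,
# and action execution. Keywords and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    transcript: str  # e.g., produced by automated transcription of a video interview

REQUIRED_TERMS = {"python", "sql", "teamwork"}  # assumed job-relevant keywords

def collect(raw_records):
    """Information collection: wrap raw records as structured applicants."""
    return [Applicant(r["name"], r["transcript"].lower()) for r in raw_records]

def filter_and_analyze(applicant):
    """Filtering/analysis: flag which required terms appear in the transcript."""
    return {term: (term in applicant.transcript) for term in REQUIRED_TERMS}

def suggest_decision(term_hits):
    """Decision suggestion: a toy classification into suitable / not suitable."""
    score = sum(term_hits.values()) / len(term_hits)
    return ("suitable" if score >= 0.5 else "not suitable", score)

def execute_action(applicant, suggestion):
    """Action execution: here we only display the result to a human decision-maker."""
    label, score = suggestion
    print(f"{applicant.name}: {label} (keyword coverage = {score:.2f})")

if __name__ == "__main__":
    raw = [
        {"name": "A. Candidate", "transcript": "I used Python and SQL daily and value teamwork."},
        {"name": "B. Candidate", "transcript": "I mostly worked alone on spreadsheets."},
    ]
    for applicant in collect(raw):
        execute_action(applicant, suggest_decision(filter_and_analyze(applicant)))
```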
Technological solutions that acquire and analyze data, and make judgments and suggestions based on it, are exactly what the present article is concerned with. As a result, they are useful to recruiters across a wide variety of tasks (such as collecting and analyzing information, or approving and rejecting applicants), and their output may serve as an additional information source for decision-makers.
Machine learning and artificial intelligence have given rise to automated procedures that may be thought of as increasingly complicated and sophisticated ways of mechanically obtaining or combining data [11,12]. Some of the newer approaches, in particular, employ sensors (e.g., webcams, microphones) to collect information from respondents, natural language processing to extract information from application responses, and machine learning algorithms to assess applicants' abilities (how well this new type of mechanical information collection and combination compares to classic mechanical and clinical methodologies in terms of validity remains an open question). Insights into the possible consequences of applying modern information analysis and decision support automation may be gained via research into the mechanical combination of information (MCI) [13]. Compared to clinical information collection and combination (for example, intuition-based combination of information), it has been demonstrated that mechanical collection and combination (for example, combination using ordinary least squares regression) can increase decision quality. However, this research also indicated that users are wary of automated means of acquiring and integrating data. Using automated systems in their occupations may affect people's behavior and attitudes to their work, according to studies that label this resistance to mechanically integrated information as "algorithm aversion" [14].
B. Performance and efficiency impact of information processing and decision-support systems
Automated data processing and decision-making systems for staff selection may be implemented in a variety of ways. In addition to the particular responsibilities assigned to an automated process (for example, information collection or evaluation of information), it is critical to consider when to present hiring managers with the results of an automated system. Decision support may be incorporated at prescribed points: before a human decision-maker examines the existing information (support-before-processing technologies) or after the decision-maker has used the information. Automated personnel selection systems, for example, might provide assistance before processing (such as a list generated by automated selection and recruitment solutions and services) [15]. Generally, the system evaluates data about each candidate and delivers its results. Hiring managers obtain these results, as well as more specifics on the applicants. As a result, they have access to the automated system's output and may check additional application data. To put it another way, decision-makers have the option of using the system's suggestion as their only
source of information, or they may combine it with other candidate data to arrive at a judgment [15]. When properly validated, these systems have the potential to boost productivity while also serving as a source of mechanically integrated information that supports decision quality. Potential drawbacks include the possibility of anchoring effects, which limit hiring managers' focus to the highest-scoring candidates. Research from traditional automation domains also shows that individuals initially regard such systems as extremely dependable, which might cause decision-makers to adopt suggestions without evaluating further, possibly contradictory information and without properly reflecting on the eligibility of applicants (i.e., they may overtrust the system). Furthermore, individuals may be "limited to the position of receiver of the machine's answer" when using systems that place assistance before processing. Because decision-makers believe they have fewer opportunities to demonstrate their competence when utilizing these technologies, they may sense a loss of reputation [15].
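The ordering just described can be sketched in a few lines: the automated ranking is delivered before the hiring manager processes any raw application data, which is precisely where the anchoring and overtrust risks arise. The data model and function names below are assumptions made purely for illustration.

```python
# Hypothetical support-before-processing flow: the system's ranking is shown
# first; raw application data is only retrieved afterwards, on request.

candidates = [
    {"id": "c1", "system_score": 0.82, "raw_application": "(full application text)"},
    {"id": "c2", "system_score": 0.64, "raw_application": "(full application text)"},
    {"id": "c3", "system_score": 0.91, "raw_application": "(full application text)"},
]

def present_ranking_first(pool):
    """Step 1: the system's evaluation is delivered before any human processing."""
    ranked = sorted(pool, key=lambda c: c["system_score"], reverse=True)
    for rank, c in enumerate(ranked, start=1):
        print(f"{rank}. {c['id']} (system score {c['system_score']:.2f})")
    return ranked

def inspect_raw_data(candidate):
    """Step 2 (optional): the manager may still open the underlying application."""
    return candidate["raw_application"]

ranking = present_ranking_first(candidates)
# The anchoring risk noted above: attention tends to stay at the top of this list.
top_dossier = inspect_raw_data(ranking[0])
```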
Fig ii: Strategic areas where AI impacts recruiting outcomes
Support-after-processing systems developed in response to the latter problems with support-before-processing systems. Like those systems, they handle data processing and assess potential hires. However, they provide decision-makers with additional mechanically integrated information that can only be employed after the decision-makers have processed the given data themselves. Aside from providing feedback, these systems may also critique human judgments. They do not seek to confirm that a human decision is accurate; instead, they serve as a prompt to pause and reflect on a previous choice. To date, medical research and practice have provided the bulk of the research on these systems [16]. For example, a doctor might first use the available data to make a cancer diagnosis (or a therapy plan). The system might then use this assessment as input and either offer the doctor the diagnosis it would have given or advise which aspects of the diagnosis look contradictory with the information at hand. Given that the concerns with support-before-processing systems (such as credibility loss) have also been linked to management duties, the usage of support-after-processing systems need not be limited to medical decision-making. They may, in theory, promote more complete information processing and the development of sound decision-making arguments. For example, in personnel selection, such systems may offset human heuristics by alerting decision-makers to candidates who were ignored or rejected too quickly [17]. Another benefit, shared with support-before-processing systems, is that they serve as a mechanical source of information combination, which may improve the quality of decisions. In contrast to support-before-processing systems, support-after-processing systems do not introduce early anchoring problems.
Because decision-makers still have to evaluate the information that was accessible before the support system is applied, such a system does not make the decision-making process any more efficient. Instead, by encouraging decision-makers to explore additional information and alternate views on the information they already have, it may further lengthen the time it takes to reach a conclusion. Furthermore, the possibility of people being too trusting cannot be discounted. However, while automated systems might still give recommendations, decision-makers will already have
examined and blended numerous information sources rather than merely depending on them. There is a chance that this will help them decide whether or not to heed the system's advice.
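By contrast with the previous sketch, a support-after-processing (critiquing) system only reveals its mechanically combined score once the manager has recorded a judgment, flagging large disagreements for reconsideration. The threshold and ratings in this sketch are assumed values, not recommendations.

```python
# Hypothetical support-after-processing (critiquing) flow: the manager decides
# first; the system then compares its own score and raises a critique when the
# two judgments diverge beyond an assumed tolerance.

DISAGREEMENT_THRESHOLD = 0.3  # assumed tolerance before a critique is raised

def critique(manager_rating, system_rating, candidate_id):
    """Compare the human judgment with the system's score after the fact."""
    gap = abs(manager_rating - system_rating)
    if gap > DISAGREEMENT_THRESHOLD:
        return (f"Reconsider {candidate_id}: your rating {manager_rating:.2f} "
                f"differs from the system's {system_rating:.2f} by {gap:.2f}.")
    return f"{candidate_id}: no critique, ratings broadly agree."

# The manager has already processed the raw data and formed these judgments.
manager_ratings = {"c1": 0.40, "c2": 0.75}
system_ratings = {"c1": 0.80, "c2": 0.70}

for cid in manager_ratings:
    print(critique(manager_ratings[cid], system_ratings[cid], cid))
```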
C. Impacts on knowledge and task characteristics
According to research on algorithm aversion, decision-makers may be influenced in many ways by adopting automated assistance systems. There are a variety of reasons decision-makers may feel diminished autonomy while using decision support systems. These include expectations of systems not being realized, and specific design choices within these systems (such as how and when to offer a suggestion) not matching human information processing. In other words, employing these technologies for managerial work may affect knowledge and task requirements in some situations (such as during information processing and decision-making assignments). The integrated work design framework, in particular, suggests many knowledge and task characteristics that influence critical attitudinal, behavioral, intellectual, and well-being outcomes. Knowledge characteristics are the demands that individuals face when completing activities (such as cognitive demands). In the context of a job, task characteristics refer to the types of tasks that must be completed and the methods used to do them [17].
Information processing, autonomy, task identity, and feedback from the job are particularly significant in connection with decision support systems. The quantity of information processing required reflects how much information a job demands. Autonomy refers to how independent workers are in completing their work and deciding how to approach it. Task identity specifies whether tasks may be completed in their entirety rather than merely focusing on certain sections of them [17,18]. The extent to which workers obtain information regarding their work performance from features of the job itself is characterized as feedback from the job. Psychological states such as experienced meaningfulness and felt responsibility for work results, as well as job satisfaction and performance, are all affected by differences in these characteristics.
The methods of information processing and decision support used in personnel selection may well affect these knowledge and task characteristics. Support-before-processing systems present their evaluations of potential employees before hiring managers process candidate data. This might lower the amount of information processing required (e.g., integrating information, comparing applicants). With regard to task identity, hiring managers may feel that they have not completed the selection process themselves. Because the system already indicates which candidates to prefer, they may experience a loss of autonomy. Hiring managers are therefore more likely to favor selection methods in which their expertise can be demonstrated, as well as those that allow for greater autonomy (for example, unstructured rather than structured interviews, or clinical rather than mechanical combination of information).
However, systems that provide help after processing may require more information processing, provide a greater sense of task identity, and allow a greater degree of autonomy. A key benefit of these tools is that they give hiring managers the freedom to carry out independent analyses and integration of data before making a final choice regarding candidates. Additionally, hiring managers may use the suggestions of systems that provide feedback after a judgment has been made as a form of performance feedback. Managers may assume that an automated system is operating as intended even if they have no experience with it. Because automated selection and recruitment systems are designed to analyze applicants' job fit, managers who trust that a system can accomplish this may compare their own judgments to its suggestions to obtain a notion of their performance. These considerations suggest how differences in knowledge and task characteristics may affect five key workplace psychological outcomes: enjoyment, boredom, satisfaction with the decision, felt responsibility, and self-efficacy.
D. Impacts on satisfaction with the decision
Knowledge professionals (such as hiring managers) spend a significant portion of their time each day processing information and making decisions. Overall job satisfaction is likely to be influenced by how satisfied employees are with their choices. When the effects and quality of a choice are not immediately obvious, satisfaction with the decision is critical. When it comes to hiring new employees, the quality of the choice is directly tied to how well the new hire will perform in the future, and these long-term implications can only be assessed by persons who are persuaded of their conclusion and happy with the decision-making process. People may become more persuaded and happier with their conclusion if they analyze information independently. Also, if individuals think they made a good choice (i.e., if they receive positive performance feedback), this may boost their satisfaction.
E. Effects on belief in one's abilities
In terms of self-efficacy, it's the idea that one's talents and capacities will allow one to do well on a certain
activity. Job performance and job happiness are directly linked to one's level of self-efficacy. Task-specific or
generic self-efficacy are both possible. Unlike general self-efficacy, which can be applied to many different
circumstances, particular self-efficacy is very situation-dependent. While overall self-efficacy refers to
participants' belief in their abilities, particular self-efficacy refers to how well participants believe they can do
the selection task at hand [18]. Completing a task on one's own should increase one's overall and particular
self-efficacy. These considerations should be increasingly obvious when taking on more difficult assignments
(e.g., tasks that afford more information processing). Self-efficacy may be boosted by gaining confirmation
of excellent work. To use terms from work design research, only those who used a support-after-processing
system or received no help felt they completed a difficult cognitive task on their own in the no-support group.
The system's information might be interpreted as a measure of how well participants in the support-after-
processing condition performed their tasks. All of this has the potential to improve both a person's overall and
particular self-efficacy [18].
IV. FUTURE IN THE U.S
The impact of artificial intelligence on workers in the U.S. will grow as more companies look to make their operations faster and increase productivity. According to USC specialists, AI's self-learning and automated capabilities offer more systematic and cost-effective data protection, protecting individuals from terrorism and even small-scale identity theft. Self-driving automobiles are where AI has the most potential to change the world in the near future. AI drivers cannot be distracted by the radio or by putting on mascara while driving. Autonomous vehicles have already arrived, thanks to Google, but they are expected to be commonplace by 2030. Boeing is developing an autonomous airliner, and driverless trains are already in use in European cities (pilots are still required to put information into the system). Doctors and hospitals will be able to better evaluate data thanks to AI algorithms, which will allow them to tailor patient treatment to an individual's genes, environment, and lifestyle [18]. Personalized medicine will be ushered in by artificial intelligence, which will do everything from diagnosing brain tumors to determining which cancer therapy is best for a certain patient.
V. ECONOMIC BENEFITS IN THE UNITED STATES
There are many economic benefits of artificial intelligence at work in the United States. Using artificial
intelligence (AI), companies may boost productivity by focusing on certain job functions and increasing the
value of "human abilities" such as creativity, problem-solving, and numerical skills. Even though artificial
intelligence will boost economic development, the benefits will not be spread equally. Artificial intelligence
(AI) will be beneficial to certain businesses while posing a danger to others. Employment in high-growth
professions like healthcare, where highly experienced practitioners cannot be replaced by automation, will be supplemented, while jobs in businesses that depend on conventional procedures will be replaced [18,19]. Authorities must seek to close the educational achievement gap between rural and urban individuals and provide financial assistance to employees who must quit their jobs to obtain new skills. As with healthcare, other "high-value" service sectors, including professional, scientific, and technical services, are expected to see modest levels of automation (34%) [19]. Because of the advantages AI offers to highly complex and specialized professions, the percentage of AI-related occupations in the economy is expected to grow, resulting in more secure and well-paying work for Americans. Assigning computers to repetitive work and people to complex jobs will boost productivity and spur economic development. AI-related innovations, according to one expert, would boost North America's GDP by $3.7 trillion by 2030. Rural areas will bear the brunt of the disruption, and the rewards will not be spread equally.
VI. CONCLUSION
This paper explored how artificial intelligence will affect future work roles and the recruitment of employees. The findings show the influence AI will have on the nature of work and what organizations can do to prepare their employees for a digital future. Managerial tasks are increasingly being supported, and in part displaced, by artificially intelligent and machine learning-based technologies. The study's purpose was to see how various types of computer-aided information processing and decision support systems alter managers' attitudes toward performing personnel selection as a distinct administrative function. Automated decision support systems alter organizational structures and job responsibilities, and it is important for companies planning to deploy these technologies to consider the impact on their personnel. Support-before-processing systems may improve efficiency in personnel selection, but they can also produce anchoring effects and cause hurried judgments. As an alternative, support-after-processing systems can boost job satisfaction and self-efficacy by increasing work enjoyment and satisfaction with choices, which might lead to happier employees. The effectiveness of such systems depends greatly on their dependability, which is likely to be less than perfect for management decision support systems.
REFERENCES
1) V. Arnold, P. Collier, S. Leech and S. Sutton, "Impact of intelligent decision aids on expert and novice
decision-makers' judgments", Accounting and Finance, vol. 44, no. 1, pp. 1-26, 2004.
2) J. Brehaut, A. O'Connor, T. Wood, T. Hack, L. Siminoff, E. Gordon and D. Feldman-Stewart, "Validation
of a Decision Regret Scale", Medical Decision Making, vol. 23, no. 4, pp. 281-292, 2003.
3) M. Endsley, "From Here to Autonomy", Human Factors: The Journal of the Human Factors and
Ergonomics Society, vol. 59, no. 1, pp. 5-27, 2016.
4) S. Guerlain, P. Smith, J. Obradovich, S. Rudmann, P. Strohm, J. Smith, J. Svirbely and L. Sachs,
"Interactive Critiquing as a Form of Decision Support: An Empirical Evaluation", Human Factors: The
Journal of the Human Factors and Ergonomics Society, vol. 41, no. 1, pp. 72-89, 1999.
5) J. Hackman and G. Oldham, "Motivation through the design of work: test of a theory", Organizational
Behavior and Human Performance, vol. 16, no. 2, pp. 250-279, 1976.
6) S. Highhouse, "Stubborn Reliance on Intuition and Subjectivity in Employee Selection", Industrial and
Organizational Psychology, vol. 1, no. 3, pp. 333-342, 2008.
7) S. Humphrey, J. Nahrgang and F. Morgeson, "Integrating motivational, social, and contextual work design
features: A meta-analytic summary and theoretical extension of the work design literature.", Journal of
Applied Psychology, vol. 92, no. 5, pp. 1332-1356, 2007.
8) T. Judge, C. Jackson, J. Shaw, B. Scott and B. Rich, "Self-efficacy and work-related performance: The
integral role of individual differences.", Journal of Applied Psychology, vol. 92, no. 1, pp. 107-127, 2007.
9) B. Addad, S. Amari and J. Lesage, "Genetic algorithms for delays evaluation in networked automation
systems", Engineering Applications of Artificial Intelligence, vol. 24, no. 3, pp. 485-490, 2011.
10) D. Barnhizer, "The Future of Work: Apps, Artificial Intelligence, Automation and Androids", SSRN
Electronic Journal, 2016.
11) J. Copeland, "Overview of System Architectural Implications of Third-Party Liability and Government
Indemnification for GPS Augmentation", Navigation, vol. 47, no. 1, pp. 7-15, 2000.
12) J. Ito, "The Future of Work in the Age of Artificial Intelligence", Joi Ito's Web, 2016.
13) S. Nofal, K. Atkinson and P. Dunne, "Algorithms for decision problems in argument systems under
preferred semantics", Artificial Intelligence, vol. 207, pp. 23-51, 2014.
14) F. Vahdatikhaki and A. Hammad, "Risk-based look-ahead workspace generation for earthwork equipment
using near real-time simulation", Automation in Construction, vol. 58, pp. 207-220, 2015.
15) M. Yuan Zhang and X. Jessie Yang, "Evaluating effects of workload on trust in automation, attention
allocation and dual-task performance", Proceedings of the Human Factors and Ergonomics Society Annual
Meeting, vol. 61, no. 1, pp. 1799-1803, 2017.
16) C. König, U. Klehe, M. Berchtold and M. Kleinmann, "Reasons for Being Selective When Choosing
Personnel Selection Procedures", International Journal of Selection and Assessment, vol. 18, no. 1, pp. 17-
27, 2010.
17) M. Langer, C. König and K. Krause, "Examining digital interviews for personnel selection: Applicant
reactions and interviewer ratings", International Journal of Selection and Assessment, vol. 25, no. 4, pp.
371-382, 2017.
18) J. Lee and K. See, "Trust in Automation: Designing for Appropriate Reliance", Human Factors: The
Journal of the Human Factors and Ergonomics Society, vol. 46, no. 1, pp. 50-80, 2004.
19) F. Morgeson and S. Humphrey, "The Work Design Questionnaire (WDQ): Developing and validating a
comprehensive measure for assessing job design and the nature of work.", Journal of Applied Psychology,
vol. 91, no. 6, pp. 1321-1339, 2006.