Int. J. Technology Intelligence and Planning, Vol. x, No. x, xxxx
Copyright © 2016 Inderscience Enterprises Ltd.
How to Measure Technology Intelligence?
Ying Wan Loh*
Email: yingwanloh@cantab.net
Letizia Mortara
Email: lm367@cam.ac.uk
Institute for Manufacturing, Department of Engineering, University of Cambridge, 17
Charles Babbage Road, Cambridge CB3 0FS, UK
*Corresponding author
Abstract: Technology intelligence (TI) is an activity that supports decision-making at many
levels. However, practitioners often find that evaluating the quality of TI activities can be very
challenging. Whilst several papers in current literature discuss performance assessments in
innovation contexts, less research specifically addresses the issue of performance measurement
for TI. This paper aims to start to fill this gap by developing empirical evidence about the current
evaluation methods adopted in industry, and the challenges posed by those metrics in assessing
TI. A framework is proposed, which suggests that the metrics used for TI follow two logics: the
first is that they are activity- or outcome-based, and the second is that they apply either to
specific projects or to the entire firm. This classification of metrics could also help practitioners
structure their future measuring and evaluating strategies.
Keywords: Technology intelligence; technology scouting; performance measure; assessment;
evaluation; metrics; decision-making; open innovation; information quality; innovation
management measurement; efficiency; effectiveness.
Biographical Notes: Ying Wan Loh completed her Master’s programme (MPhil) in Industrial
Systems, Manufacture and Management at the Institute for Manufacturing, University of
Cambridge in 2014. This work was part of her Master’s dissertation. Prior to starting the
Master’s programme in Cambridge, she graduated from the University of Glasgow with a BEng
(First Class Honours) in Mechanical Design Engineering. She is currently working for a British
aerospace company.
Letizia Mortara is a Senior Research Associate at the University of Cambridge and a By-Fellow
at Churchill College, Cambridge. She is also an Associate Editor for the R&D Management
journal. She has worked within the Centre for Technology Management at the Institute for
Manufacturing since 2005. Prior to this, she gained her first degree in Industrial Chemistry at
the University of Bologna in Italy. After spending three years working as a process/product
manager in the chemical industry, she moved to the UK where she gained her PhD in processing
and process scale-up of advanced ceramic materials at Cranfield University. Letizia’s research
focuses on understanding how companies implement open innovation and keep abreast of the
latest developments in technology. She is also currently focusing on additive manufacturing
technologies (3D Printing) in manufacturing and their implications for business.
1. Introduction
Technology intelligence (TI) is defined as “the capture and delivery of technological
information as part of the process whereby an organisation develops an awareness of
technology threats and opportunities” (Kerr et al., 2006, p.75). TI responds to a broad set of
decision-making needs (from strategic to operational), as it helps a firm become aware of
important developments in technologies (Kerr et al., 2006). Amongst other activities, TI could
support innovation processes and, for instance, enable the identification of prospective partners
with interesting technological knowledge (Mortara et al., 2010), or could be used to identify
technology commercialisation opportunities (Rohrbeck, 2007).
The importance of having evaluation approaches in place for intelligence systems has been
recognised across several literature domains. Neely et al. were amongst the first to think about
the measurement of technological information and illustrated, with a case study, that technology
assessment forms could be used to identify emerging technologies (Neely et al., 1997). The
competitive intelligence (CI) literature – i.e. the literature about “the process of developing
actionable foresight regarding competitive dynamics and non-market factors that can be used
to enhance competitive advantage” (Prescott, 1999, p.42) – suggests that performance
indicators are important, because CI could absorb considerable budget and, hence, managers
would be interested in proving that the intelligence function is making a contribution to the
company’s performance (West, 2001). This notion can also be applied to TI (Davison, 2001),
and appropriate evaluation would offer some form of protection to intelligence practitioners
when the next round of redundancy is being considered (West, 2001). Similarly, in the field of
business intelligence (BI) – which regards “a system that combines data gathering, data storage,
and knowledge management with analysis to provide input to the decision process” (Negash &
Gray, 2008, p.175) – Lönnqvist & Pirttimäki argued that measuring BI is useful both to manage
the BI process and to determine its value (Lönnqvist & Pirttimäki, 2006). Although these
authors proposed steps to design performance measures, they also admit that actual performance
measures have not been presented and the literature lacks input from the real world. In the TI
field, Kerr et al. pointed out that TI metrics should measure both the quality and value of TI,
and went on to propose using measures derived from the information quality (IQ) literature to
assess TI (Kerr et al., 2006). However, they did not test whether these metrics could be
practically implemented.
Instead, most works discussing metrics do so in other contexts. For instance, scholars studied
innovation performance measurement (Adams et al., 2006; Dewangan & Godse, 2014), whilst
a number of recent papers have specifically addressed performance measurement issues for
Open Innovation (OI) (Rogo et al., 2014; Huizingh, 2011; Enkel et al., 2011; Chesbrough,
2004). However, although the search for technological partners through TI is an important step
in OI (Mortara et al., 2010), research has not yet discussed how companies evaluate the impact
of the information gathered through intelligence for OI.
Hence, this paper aims to start filling this gap with an empirical study which reviews the
measurement approaches companies currently adopt for TI and the practical challenges in this
respect.
This paper is organised as follows: Section 2 presents the literature review on performance
measurement in TI, and an analytical framework based on existing theory; Section 3 describes
the research methodology, and Section 4 presents the data obtained from the interviews with
professional managers involved with TI; Section 5 derives a framework for the assessment of
TI in practice, and discusses its implications for theory and practice; finally, Section 6
summarises the conclusions and the limitations of this work.
2. Literature Review
In the literature, many different terms are used to describe technology intelligence (TI), such as
technology monitoring, technology forecasting, technology scanning and technology
assessment (Lichtenthaler, 2004a; Kerr et al., 2006). However, they all refer to the knowledge-
gathering and dissemination process, where technology is the main topic of concern. In reality,
every company practises TI in a different and unique way to fulfil its business needs: Reger
argued that many TI activities are carried out informally by gatekeepers (Reger, 2001) while
Lichtenthaler proved that TI could be organised in layers of structural, hybrid and informal co-
ordination (Lichtenthaler, 2004a). Industry (Lichtenthaler, 2004b) and country-specific
(Mortara et al., 2009) reviews show that activities included under the umbrella of TI range
from the development of scouting networks (Mortara et al., 2010; Rohrbeck, 2010) to the
establishment of document and patent mining tools (Lee & Mortara, 2012), or the setup of calls
for information via idea competitions (Mortara et al., 2013), or working with external
intermediaries (Chesbrough, 2006; Jeppesen & Lakhani, 2010).
Intelligence provision is important to decision-making and strategy-formulating, which is why
performance measurement and success factors for TI require attention (Dinter, 2013; Mortara
et al., 2009). The majority of the literature on TI mentions the importance and the need to control
and assess TI (Kerr et al., 2006; Mortara et al., 2009; Rohrbeck, 2007) but, so far, very few
scholars have delved into the details of how this is, or should be, done. In particular, several
measures of performance are available across different streams of literature, including the
foresight (Battistella, 2014), the innovation management (Adams et al, 2006) or the information
quality (Stvilia et al., 2007) literature, but there is not yet a consolidation of this knowledge.
This work has the ambition to provide such a consolidation, pursued by directly studying those
faced with the task of measuring TI in practice.
The structure of the literature review is as follows: the general performance measurement
literature is reviewed in Section 2.1; performance measures specifically in TI are reviewed in
Section 2.2; an analytical framework for measuring TI is illustrated in Section 2.3. Its purpose
is to structure the findings from the literature review and act as a guide in the data collection
and analysis phase.
2.1 Why should management practices be measured? Performance measurement in the
management literature
Performance measures are a way of communicating top-level strategies across a company. They
are also a means to quantify the efficiency and effectiveness of actions and processes (Neely et
al., 2005; Flapper et al., 1996; Neely et al., 1997). For others, the fundamental function of
metrics is to exercise control, communicate and improve performances (Melnyk et al., 2004).
Performance measures are created, for example, when prompted by a business need, for audit
purposes and to model activities, or for benchmarking reasons (Bourne & Neely, 2003).
Traditional performance measures are used to quantify the financial performances of a company
(e.g. Keegan’s Performance Measurement Matrix (Keegan et al., 1989)). However, metrics
could also be developed to measure seemingly fuzzy processes, as demonstrated by the Quality
Evaluation Framework (Heidari & Loucopoulos, 2013). The literature describes the complexity
of developing and implementing good performance measures (Neely et al., 2000; Neely, 2002;
Neely, 2006).
Simons has shown that companies with different management control systems used different
evaluation and reward strategies (Simons, 1990) and literature has shown that performance
measures influence people’s behaviour in both positive and negative ways. On one side, well-
communicated metrics could change people’s behaviour and help achieve a common business
goal. However, performance measures could also encourage short-termism where people lose
sight of the longer-term objectives in order to achieve good scores on short-term measurements
(Neely et al., 2005). TI managers should be particularly careful and try to avoid such a shortfall,
because intelligence work does not usually bear immediate results or rewards (Wheaton &
Chido, 2007). Neely et al. state that the key issue in designing performance measures “is that
they have to match the organisation context” (Neely et al., 1997, p.1135). Hence, the type of
performance measurement to be chosen needs to link with the organisational culture. Within a
clan culture an informal control strategy is most effective (Büschgens et al., 2013). On the
contrary, Wynen argued that hard forms of control, which entail performance-related awards,
induce higher levels of innovation-oriented culture (Wynen et al., 2014). Such control of
organisations’ activities could be achieved through setting standards, monitoring processes and
results, which indirectly control behaviours and output, possibly with the use of metrics and
indicators (Grieves, 2010). Enforcing a performance measurement system on innovative
functions such as TI should be carefully considered, so that it does not interfere with innovative
cultures, considering that to capture and share new knowledge requires a certain degree of
openness and flexibility (Durst & Ståhle, 2013). Besides, managers need to ensure that the goals
set for the TI team fit the long-term technology strategy of the company (Melnyk et al., 2014).
2.2. What do we know about measuring TI?
Intelligence is essentially the gathering and dissemination of information for different purposes.
Therefore, works relevant to performance measurement are found in different streams of
literature. The evaluation of information and knowledge management is often carried out in
terms of efficiency and effectiveness (Neely et al., 2005), and these criteria are adopted
to evaluate information systems for foresight (Battistella, 2014).
Efficiency is essentially an evaluation of the TI process whereby, according to the foresight
and IQ literature, cost, time, methodology employed, ethics and rigour feature as key metrics
(Battistella, 2014). Effectiveness instead refers to the outcome of the process and the value of
the results (Battistella, 2014). Across the streams of literature, the outcome is measured in
different ways. It has been suggested to measure the impact of intelligence on the firms’ overall
knowledge which could be stored in tacit or explicit form. For the explicit part, knowledge
repositories (the codified knowledge inside a database) can be evaluated in various ways
(Adams et al., 2006). For the tacit part, Battistella summarises metrics such as “number of new
networks formed” or “changing existing institutions and building partnerships among actors”
under the category of “social capital and people” (Battistella, 2014).
When TI is used primarily to support innovation processes, it can be measured on its impact on
the firm’s innovation strategy capability (Adams et al., 2006). Hence, the results of searching
for information outside can also be measured according to the innovativeness of the organisation
as a consequence of using intelligence. Although metrics such as “rate of innovative product
launched” have been suggested, the challenge for practitioners (e.g. the technology scouts) is
that it could be difficult to link the final result, such as the revenue generation from innovative
products, to a TI project (Melnyk et al., 2014). The success of innovation projects depends on
other business functions, and TI knowledge could face a variety of obstacles to get to the market
(Rohrbeck, 2007; Kaplan & Tripsas, 2008). Others suggest using the “rate of ideas generated”
as a result of searching outside (Chiesa et al., 1996; Salter & Ter Wal, 2014).
The Information Quality (IQ) literature concentrates instead on measuring the outcome of TI in
terms of its intrinsic value – i.e. the quality of information itself. As organisations are
increasingly looking at ways to measure and improve IQ within their businesses, research on
IQ has increased significantly (Lee et al., 2002). Several frameworks have been proposed for
IQ measurement purposes (e.g. Lee et al., 2002; Stvilia et al., 2007; Woodall et al., 2013);
nevertheless, they are mostly quantitative metrics popular within the computer science
literature, and are scarcely mentioned within the TI literature (with few exceptions e.g. Kerr et
al., 2006). In this field, Naumann & Rolker categorised IQ evaluation into three assessment
classes (subject, object and process criteria) and developed a set of IQ metrics for each class
(Naumann & Rolker, n.d.). This aligns with the TI operating cycle: where the subjects are the
intelligence practitioners and the decision-makers, the object is the TI message and the process
links to the various steps of the TI cycle (identify, coordinate, search, filter, disseminate and
decide) (Kerr et al., 2006). Along these lines, the metrics proposed for the intelligence messages
generated by the intelligence process focus on the quality of the (competitive (West, 2001) and
technology (Kerr et al., 2006)) include for instance “accuracy”, “depth”, “relevance”,
“responsiveness” and “timeliness.” In addition, other IQ dimensions are “intrinsic value”,
“accessibility”, “contextual” and “representational” value (Wang, 1998). Each category
includes IQ metrics, for example, the “intrinsic information” category includes metrics such as
“accuracy” or “objectivity.” The literature on IQ lacks, however, an explanation on how such
dimensions could be applied in the real world to assess intelligence insights developed by the
overall TI systems.
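To illustrate how such IQ dimensions might be operationalised, the sketch below scores a single TI message as a weighted average over a subset of dimensions. This is only a minimal illustration: the weights, the 1–5 rating scale and the scoring function are our own assumptions, not prescriptions from the IQ or TI literature.

```python
# Minimal sketch: scoring a TI message against illustrative IQ dimensions.
# The dimension names follow those discussed above (Wang, 1998; Kerr et al., 2006);
# the weights and the 1-5 rating scale are hypothetical assumptions.

def iq_score(ratings: dict, weights: dict) -> float:
    """Weighted average of 1-5 ratings over the weighted IQ dimensions."""
    total_weight = sum(weights.values())
    return sum(ratings[dim] * w for dim, w in weights.items()) / total_weight

# Hypothetical weights: relevance and credibility emphasised, timeliness less so.
weights = {"accuracy": 1.0, "relevance": 2.0, "credibility": 2.0,
           "accessibility": 1.5, "timeliness": 0.5}

# Example ratings a decision-maker might record for one intelligence message.
ratings = {"accuracy": 4, "relevance": 5, "credibility": 3,
           "accessibility": 4, "timeliness": 2}

print(f"IQ score: {iq_score(ratings, weights):.2f}")  # 3.86 on a 1-5 scale
```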
2.3. An analytical framework for measuring TI
[Insert Figure 1 about here]
Figure 1 presents an analytical framework summarising the works reviewed so far across the
different streams of literature. Elements of performance measurement (such as process and
outcome) were applied to the TI context. This framework was employed to guide the design of
the interview questions. The figure describes different points (and associated criteria) to measure
TI:
1) The TI cycle (process) (Kerr et al., 2006). These metrics measure the TI activity;
2) The TI process outcome, i.e. the TI message. In this case the judgement is at the level of
the knowledge generated and the measures proposed by the IQ literature seem relevant;
3) The ultimate use of the intelligence, i.e. decision-making process. In this case the
judgement is about the impact of using TI.
The first point could be associated with the concept of efficiency; the latter two measuring
points refer to the effectiveness of TI.
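As a compact restatement of the framework, the sketch below encodes the three measuring points and their mapping onto efficiency and effectiveness; the enum labels are ours, chosen purely for illustration.

```python
# Sketch of the three measuring points of the analytical framework (Figure 1)
# and their mapping to efficiency/effectiveness; labels are illustrative only.

from enum import Enum

class MeasuringPoint(Enum):
    TI_PROCESS = "TI cycle (activity)"           # point 1: the TI process itself
    TI_MESSAGE = "TI process outcome (message)"  # point 2: the knowledge generated
    DECISION_IMPACT = "use of the intelligence"  # point 3: impact on decision-making

def criterion(point: MeasuringPoint) -> str:
    """Point 1 maps to efficiency; points 2 and 3 to effectiveness."""
    return "efficiency" if point is MeasuringPoint.TI_PROCESS else "effectiveness"

for point in MeasuringPoint:
    print(f"{point.value:35s} -> {criterion(point)}")
```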
With this framework in mind, we set out to gain insight into current TI evaluation methods in
industry, to understand the challenges associated with measuring TI in practice and to create a
framework to incorporate the empirical research into theory.
3. Research methodology
This section describes the research methodology of this work. It starts with the methodology
selection, research design and case selection. The sampling strategy for the cases is further
elaborated followed by the interview design. Finally, the method of analysis is outlined.
This work used a qualitative and exploratory approach, given that there was little prior research
on performance measurement in TI. The literature review was performed to establish a high-
level fundamental understanding of the topic, with a proposed analytical framework to structure
the data analysis. Data was primarily collected through semi-structured interviews
complemented by documents provided by the interviewees. We chose to engage in semi-
structured interviews as “interviews provide opportunities for mutual discovery, understanding,
reflection and explanation […] and elucidate subjectively lived experiences and viewpoints”
(Tracy, 2013, p.132). This method is most suited for inductive empirical study (Eisenhardt &
Graebner, 2007). Further, a focus group was organised, with the aim of enriching the
understanding of the topic through discussions with participants. The synergistic group effect
in focus groups can generate a larger number of ideas through the interactions and stimulated
discussions (Stewart & Shamdasani, 1990). It is this group energy and diversity of opinions that
distinguishes focus group interviews from conventional one-to-one interviews (Berg, 2001;
Chiu, 2003). The data from the focus group complemented and triangulated the data collected
from case studies. Hence, we employed a mixed methods research approach (Johnson &
Onwuegbuzie, 2004), where case study was chosen as a primary method, and the focus group
discussion as a complementary addition to the research methodology. A nested arrangement
was chosen where the focus group was nested within the case study method (Yin, 2006).
Interviewees were preferentially accessed through a research consortium, and networks, from
the Centre for Technology Management (CTM), University of Cambridge. The research
consortium is a collaboration between CTM and industrial partners from multinational
companies in a variety of sectors. The managers participating are involved in innovation and
technology management issues within their firms and range in hierarchy (from top management
to operational managers). The consortium follows an engaged scholarship philosophy (Van De
Ven, 2007) and aims to conduct practice-oriented research, and share experience between
academia and industry. This was a pragmatic choice, guaranteeing access for direct interviews
and identifying suitable interviewees. In addition, contacts were accessed through the authors’
personal network. To be considered for the interview, companies should:
• Have more than 800 employees
• Carry out (technology) intelligence activities
• Operate in a technology-intensive environment
• Operate internationally
The preferred interviewees should:
• Have the role of technology managers or technology scouts
• Have more than three years of experience conducting TI
The selection of interviewees followed a theoretical sampling strategy (Patton, 1980; Glaser,
1978). This was chosen because it offers the advantage of strengthening the rigour of the cases
to generate theory (Coyne, 1997). Diversity in the sample was sought as the interviewees
included a number of multinational corporations operating in different industries, a research
institute, a government agency and a military department in the defence sector.
A total of 12 case study interviews and one focus group were carried out (the focus group
involved eight managers who had not taken part in the interviews) (see Table 1).
[Insert Table 1 about here.]
Interviews were carried out following a semi-structured protocol whereby the main questions
were set out in advance, but the interviewer was free to expand over topics of interest that
emerged during the session. All interviews lasted one hour. The questions were sent to the
interviewees ahead of the interview, and follow-up documents were collected after the session.
Four of the 12 interviews were conducted in person, and the rest were carried out over the
phone. All the interviews were recorded with the consent of the interviewees.
The interview questions were designed to cover these main sub-topics:
• What is the context for TI in your organisation? Why is it carried out?
• What kind of knowledge management system is used to store and share information/
intelligence?
• How is TI measured (e.g. explicitly or implicitly)?
• What are the challenges in trying to measure TI?
• How important is it to measure TI for you/your firm, and why?
After the interviewees discussed the measures used in their companies to assess TI, they were
explicitly presented with a list of IQ metrics. They were asked to discuss and identify the IQ
metrics that they considered important for TI information.
The focus group lasted 1.5 hours, and was conducted using a series of open-ended questions
similar to the ones used in the interviews. The discussion was recorded. The important points
were captured on post-it notes, which were rearranged in clusters during the workshop. The
findings and outcomes of the discussion were summarised and presented back to the
interviewees at the end of the session for validation.
For the within-case analysis, the transcriptions from the interviews were coded and organised
according to patterns in the statements. We then found relationships between different codes
(e.g. “activity metrics”, “measure the process”, and “how we go about it”). A cross-case analysis
was performed to compare and contrast the results from each case. Analytic manipulation
techniques were used; for instance, putting information into different structures, creating data
displays and matrices, and tabulating the frequency of code occurrence across the different
interviews (Miles & Huberman, 1994). To choose the important elements of the framework, we
looked for reinforced agreement across the evidence. For instance, when more than half of the
sample mentioned a time-related dimension such as “short-term vs long-term” or “project-
specific vs non-project-specific”, it was decided that this factor should be incorporated in the
framework. The data from the interviews were also compared to the focus group, to ensure a
level of consistency in the results.
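To make the tabulation step concrete, the sketch below counts how many cases mention each code and flags those raised by more than half of the sample, following the decision rule described above. The codes and case data are invented for illustration and do not reproduce the study data.

```python
# Illustrative sketch of the cross-case tabulation: count code occurrences
# across interviews and keep codes mentioned by more than half of the sample.

from collections import Counter

coded_cases = {
    "C1": {"activity metrics", "long-term impact"},
    "C2": {"project vs firm", "counter-factual"},
    "C3": {"activity metrics", "project vs firm"},
    "C4": {"activity metrics", "long-term impact", "project vs firm"},
}

frequency = Counter(code for codes in coded_cases.values() for code in codes)
threshold = len(coded_cases) / 2  # "more than half of the sample"

for code, n in frequency.most_common():
    verdict = "incorporate in framework" if n > threshold else "set aside"
    print(f"{code:20s} {n}/{len(coded_cases)} cases -> {verdict}")
```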
This process was repeated iteratively to construct a theoretical framework (Eisenhardt, 1989).
After the framework was fixed, we placed it within the context of previous theories and research
findings, and discussed its significance and limitations.
4. Results
Section 4.1 discusses the interviewees’ opinion about the need to measure TI; Section 4.2
categorises the metrics currently used in practice under the “activity vs outcome” (Section 4.2.1)
and “project vs firm” (Section 4.2.2) dimensions; Section 4.3 reports the challenges associated with the
metrics suggested in Section 4.2. Data from interviews are indicated as C1 to C12. FG stands
for Focus Group.
4.1 The need for TI measures
Many interviewees expressed a need for establishing performance measures in their TI
processes (e.g. C1, C3, C4, C5), whilst others rejected the notion of quantifying their work in
TI (e.g. C4 and C9). One of the main reasons for measuring TI is to justify the investment of
resources in TI. For example, “We have to be able to demonstrate some level of value,
particularly as we move into using funding and resources to bring in these technologies” (C5).
C10 had in the past needed to prove the added value of TI when he requested resources: “We
have made several attempts over the years, largely to convince the management that what we
are doing is worthwhile. The most recent effort we put into monitoring quality, value and
usefulness, was brought up when we needed to start recruiting… the executive board required
us to produce a report to justify this. That was not the first time we have gone through an
exercise trying to measure the value [of TI]… you do get people in management who think that
everything is on the internet, where you can just Google [it]. We are trying to prove that in fact
it is not the case.”
Interviewees who were not working in an environment governed by any formal performance
measure recognised their potential use, for instance to track progress, C4: “[…] metrics can be
useful for different things; for example, understanding how valuable is being a certain
intermediary you are using, what interaction are you getting with them and what is the
conversion rate for things that are looking promising. It will help you know how efficient that
route is. So it is not [only for] measuring the value, but almost the efficiency of what you are
doing and [whether you] should be doing something different.” Another reason could be to
communicate their current position to the management, C10: “In general, we are not very
worried about performance measurement, until one day the manager turns around and ask ‘how
you’re doing?’”. Lastly, performance measures are seen as a way to create a structure and some
level of control when the function gets larger, C8: “It’s not formally monitored at the moment,
if it was bigger we would start to make this more managed.” and C2: “Because you still need to
link back to the core business, you can’t just let people search… at some point when it is
growing, you have to manage and structure it.”
4.2 Metrics used
Table 3 in the appendix presents an overview of the key metrics reported by the interviewees.
4.2.1 Activity vs Outcome-based metrics
Many interviewees have pointed out that for TI it would be more sensible to measure the process
(the level of activity), rather than its outcome (the quality and value of the intelligence). For
example, “In TI [...] measuring the results would be hard. Because it is a new technology and
you don’t know where to go. Maybe [we should] measure the processes” (C3).
Many (C1, C4, C5, C7, C11, C12) propose the “number of leads” as a good proxy measure to
demonstrate the level of TI activities. “We have the number of ideas being generated, the
number of leads that exist and the number of leads that feed into the whole process of seeking”
(C5). Others offered different activity metrics: “Network gained” (C2, C3, C4, C5, C6, C8,
C11), “speed to decision-making” (C5, C10, C11) and “geographical coverage” (C4, C5, C6,
C8). There are also others who offered a set of generic items, such as “[I monitor] weekly posts
and I […] go through everything we’ve come across” (C8). C5: “[…] how many people are
getting access, are reading our seeking materials is something we are looking at [as a metric].”
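As an aside for practitioners, activity metrics of this kind are straightforward to derive from a scouting log, as the sketch below illustrates. The log structure and field names are hypothetical, not taken from any of the interviewed firms.

```python
# Minimal sketch: deriving the activity metrics quoted above ("number of leads",
# "network gained", "geographical coverage") from a hypothetical scouting log.

leads = [
    {"id": 1, "contact": "Univ. A",    "country": "UK", "fed_into_pipeline": True},
    {"id": 2, "contact": "Start-up B", "country": "DE", "fed_into_pipeline": False},
    {"id": 3, "contact": "Univ. A",    "country": "JP", "fed_into_pipeline": True},
]

number_of_leads = len(leads)
network_gained = len({lead["contact"] for lead in leads})        # distinct contacts
geographical_coverage = len({lead["country"] for lead in leads}) # distinct countries
conversion_rate = sum(l["fed_into_pipeline"] for l in leads) / number_of_leads

print(f"Leads: {number_of_leads}, network: {network_gained}, "
      f"countries: {geographical_coverage}, conversion: {conversion_rate:.0%}")
```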
In parallel, companies showed an interest in measuring the outcome of the TI process (as in its
impact on the decision-making). For example, “We are trying to see the launches year by year,
how many launches have had an OI component [i.e. partner leads brought in by the technology
scout], and the sort of percentages, but that depends on different categories, and they vary year
to year, so we are still learning, how that metric can tell us more about the value of the [OI
activity] into each one of the categories of the turnover.” Similarly, C11 evaluated TI on the
basis of the value of the projects brought in (number of technology start-ups): “My individual
target is to focus on certain types of projects that are classified as high value, medium value
and low value. The ones that are high value are obviously what everyone is more interested in
bringing in, and my targets are focused around the high value end” (C11). C10 has engaged
with the intelligence customers to determine the quality of their intelligence output: “We ran
surveys among our users to ask them about the quality of the database, how useful it is to them,
how much project money it has earned for them. So that’s looking from a user’s point of view,
how much value they’ve got out of it.”
C1 acknowledged the difficulty of measuring outcomes, but remained convinced that it should
be the way forward: “It is more difficult to measure TI outcomes, but in the end it makes more
sense to start measuring outcomes, when you have functions [TI] that are very well established
and embedded into the way of working.” However, not everyone agreed: Because whether you
succeed or not [in TI] is not a rule of physics, it [the technology/project] may be too expensive,
it may be technically impossible, so we are not measured on the success or outcome, we are
measured on how we go about it” (C9).
For the judgment of TI’s outcome, interviewees were asked to comment on the IQ dimensions
that were deemed important during their TI work. Figure 2 reveals the spread of the comments
received and Table 2 summarises the comments related to these dimensions.
[Insert Table 2 about here]
[Insert Figure 2 about here]
4.2.2 Project vs Firm-based metrics
Interviewees often discriminated between short- and long-term evaluations of TI. In most cases,
their TI activities are carried out within projects (e.g. “All of our activities are project-based”
(C7)). Therefore, interviewees expressed the need to assess TI from a project’s point of view.
In that case, TI needs are more immediate and short-term (“[One evaluation is about the]
number of papers I read for this TI project” (C6)). However, the longer-term impact of TI on
the whole business was also mentioned (e.g. “Impact on revenue” (C1)). Across the
interviewees, the timeframes for TI projects varied widely, depending on the type of roles and
the product/technology lifecycle (a short-term project for C11 is a few months while for C2 it
is 3-5 years).
4.3 Challenges in measuring TI
There was disagreement amongst those involved on the best way to evaluate TI. One of the
main challenges is that many TI processes are not formalised or standardised enough to be
measured: “This is not an entirely standardised process, so the results could depend on who
you talk to” (C2). The subjectivity or the personal clout of the TI officers were mentioned as
common issues (C2, C3, C5, C6, C7, C8, C12). For example: “When you are established…
people will listen to you, if you are not then it is very difficult” (C6).
Interviewees pointed out that performing well in activity metrics does not always guarantee a
good TI outcome: “It is not entirely appropriate to use activity metrics as a representation of
value creation” (C4). The opposite was also mentioned, because “it is not fair to assess scouts
purely on value creation metrics, because it takes too long for things to go through the pipeline
and there are so many other factors that are going to influence what happens to it” (C4).
It is important for managers to monitor both short- and long-term performances simultaneously
and maintain a portfolio of perspectives in TI. In the interviews, they reported a tendency
towards focusing on the immediate project needs and the risk of losing sight of the longer-term
implications. For instance: “The obvious problems we have with [TI] is when someone tells us
‘this [project] is a problem’, then we give them priority, spend a lot of time investigating it and
in the end [the result is that] there’s not enough to make this [project] worthwhile” (C1). Short-
termism and the risks of formalising the metrics came up also during the interview with C9:
“Tell me how you measure me and I will tell you how I behave.” The concern with putting metrics
in TI is that people will “work to the number and not the spirit of the idea behind the number”
(C9), especially when they are linked to performance reviews and remunerations.
Focusing too much on the opposite (long-term view, informal) could also harm the TI function,
as mentioned by C4, whose TI function was discontinued when the company suffered from
financial issues: “[…] so you want a balance across [the spectrum], a portfolio of long-term to
short-term things. [..] In hindsight we probably had too many long-term things and not enough
short-term things, to be a proof of the value.”
Finally, measuring TI is hard because it is “trying to prove the counter-factual” (C2). In some
cases “you cannot justify [a TI activity] unless you implement it, and see the gap of having it or
not [having it]” (C2). In response to this problem, some firms (e.g. C10) tried to implement a
customer satisfaction survey to demonstrate more directly the value of TI in financial terms.
With the exception of C2 and C10 (who rely on sophisticated infrastructure for knowledge
management), interviewees reported relying on an informal, network-based
mechanism to identify the internal TI knowledge: “One of the challenges that we have is ‘how
do we pull out the information from people? Who are the people with access to a worldwide
forum? Who goes to international conferences all the time?’ ” (C7). In most cases, some form
of knowledge management system or IT tool existed, but they still relied on personal networks
to track down information within the firm (C3, C4, C5, C6, C8, C9, C12). “[The knowledge
management system] is very people-based, not system-based. There is no single page I could
go to, to find information on what has been done before or when. [To know] what work has
been done, I have to know the people and go and talk to them” (C9). C12 agrees: “We have lots
of lessons learnt. I don’t think we tap into them. We have lots of pressure to commit to
timescales, so I think sometimes we over-commit and under-deliver. It is about trying to get the
correct balance. And that’s the thing, it is about how do you get the best out of these systems
and how do you share things between stakeholders. We don’t analyse enough why some things
don’t work and use that as a mechanism to improve.”
5. Discussion
The main objective of this paper is to gain insights on current assessment practices in TI, and
to develop a framework to support TI evaluation. To this end, we have extracted from interview
data formal and informal measures of TI currently used in industry. This section analyses the
results and discusses the implications. Starting with the analysis of the need for measuring TI,
this section discusses the pros and cons of the metrics used, and summarises them in a
framework.
Although respondents were sometimes not completely convinced of the need for TI metrics,
we have noticed that, in practice, TI performance was measured and evaluated both explicitly
and implicitly, in alignment with what is illustrated in the literature (Neely, 2002).
Consistent with what is described in the literature (see Figure 1) (e.g. Wheaton & Chido, 2007),
we also observed that practitioners separated metrics into activity- and outcome-based. Activity
metrics were used to communicate the level of activity within the TI function, and as an
indicator of the efforts and progress of the TI process. In this case, the knowledge management
processes, idea generation process, the knowledge repository or the information flow (Adams
et al., 2006) were the objects of assessment. For the outcome metrics, contrary to what is
suggested in the literature (Kerr et al., 2006), with the exception of those who dealt with computerised
knowledge repositories and information systems (e.g. C10), we did not find that managers
evaluate the specific insight - the outcome of the TI process. This was in contrast with
practitioners' desire for a clear distinction so that the value of TI would not be unfairly judged
by what appears to be the outcome of the decision. When specifically asked to comment on a
list of IQ metrics, they had a variable interpretation of the meaning of each metric. “Credibility”
was a particularly subjective metric, dependent on the culture of the organisation and the
characteristics of the recipients of the message. West proposed to measure competitive
intelligence (CI) with “accuracy”, “depth”, “relevance”, “responsiveness”, “timing” and
“comprehensiveness” (West, 2001). However, “relevance”, “credibility” and “accessibility”
were indicated as important by TI officers and only “relevance” matches the CI evaluation
criteria. “Timeliness”, which was also mentioned in the foresight literature (Battistella, 2014),
was considered less important. None of the interviewees chose “completeness” as an important
metric in TI. This perhaps was due to the different nature of TI and CI: in CI there is a higher
possibility of obtaining comprehensive information about the competitors, but it is much harder
to obtain a complete picture of emerging technologies worldwide. Another IQ metric which
seems to be highly subjective, and dependent on the culture, is “verifiability”, as the comments
received showed that some interviewees meant the verifiability of the source of information,
whilst others meant how believable that information is for the decision-maker.
In many other cases, TI was evaluated on the basis of the consequences of its use, such as the
innovativeness and well-being of the firm, or the success and value of some specific projects
(i.e. based on the outcome resulting from the use of the intelligence message). This reflects the
innovation management metrics (Adams et al., 2006).
Besides that, the evaluation of TI was done in relation to a specific context, which could be
either project- or firm-specific. Project-specific metrics are generally non-reusable in the wider
or longer-term context. Firm-specific metrics are not bound to any single project, and are a
means for indicating the effectiveness, efficiency and sustainability of the TI function in the
longer-term. Most of our interviewees conducted intelligence on a project basis (Adams et al., 2006),
whilst some (e.g. C1) focused on the holistic contribution of TI to the organisation (Chenhall &
Langfield-Smith, 2007).
According to the observations above, the metrics used in practice could be placed on a
framework matrix (see Figure 3), where each quadrant is described below.
5.1 TI Evaluation Matrix
[Insert Figure 3 about here]
5.1.1 Activity-based & Project-specific
These metrics are the easiest to measure and quantify. This is the area least affected by personal
biases. Metrics such as “the number of leads gained” and “the number of papers/patents
reviewed” are easily quantifiable and bear a certain level of objectivity. Nevertheless, the
challenge here is that the level of activity does not necessarily translate into quality of the TI
outcomes (C3 and C4). These metrics can be useful for TI activities that take a long time, and
where there is the risk of providing interim and partial results of an investigation, which might
bias the decision-makers, as they could unduly provide a point of anchoring (Mortara, 2015).
5.1.2 Activity-based & Firm-specific
The metrics here are used to monitor the level of progress and activity and the health of the TI
system and processes. Both “geographical coverage” and “network gained” are popular means
of measuring the health of the company’s TI function. Nevertheless, interviewees mentioned
indicative measures that they did not routinely capture and quantify. According to Neely et al.,
informal, verbal and perceived measures are the foundations for designing performance
measures that fit the requirements of the organisation (Neely et al., 2005). Hence, these metrics
could be fundamental to evaluate the health of the TI activity and system within the firm, and
implicit or informal measures can be used to check position, communicate position, confirm
priorities or to compel progress – based on Neely’s four “CPs” of measurement (Neely, 2006).
5.1.3 Outcome-based & Project-specific
Metrics in this quadrant are usually employed after a project is completed, and are used to
evaluate its results. Recent research (Salter & Ter Wal, 2014) indicates that, with the increase
of external searches, the number of ideas taken up in specific projects rises up to a point, after
which the costs of managing too many inputs overcome the advantages of receiving them.
However, the results of a TI project could be influenced by many factors beyond the validity of
the intelligence received (such as budget constraints or lack of supporting infrastructure to
develop the project (C4 and C9)), and hence these measures can be inadequate, if used in
isolation, to determine the performance of TI.
5.1.4 Outcome-based & Firm-specific
This quadrant is important for measuring the effectiveness of TI for a company in the longer
term. Ultimately, indicators in this quadrant are what the companies want to achieve with their
technology strategy. However, very few companies in our sample explicitly measured TI in this
quadrant. Managers usually get a “sense” or “feel” about how well the TI is delivering value
(C8), and the criteria they used may be only implicitly based in this quadrant. Measuring in this
quadrant could be opinion-based, subjective, and down to interpretation (C2, C3, C5, C7, C8,
C9, FG). Also, losing sight of the short-term benefit of the TI activity in the day-to-day routine
can ultimately undermine its importance in the eyes of top management (C4). In TI, it is often
hard to determine the results (C3, C4, C9), because “it takes too long for things to go through
the pipeline and there are so many other factors that are going to influence what happens to it”
(C4). For this type of evaluation, data needs to be collected over a long period (it is very difficult
for practitioners to demonstrate and correlate TI performance with the wealth of an organisation,
as it takes a long time for any TI message to underpin any company wealth) and people and
processes could have changed (Mortara et al., 2009). Some researchers propose that search
activities are carried out only by successful organisations, which have the availability of
financial slack (O’Brien, 2003), and hence measuring TI success by linking it with the
company’s financial success might be a tautology.
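To summarise the four quadrants, the sketch below encodes the TI Evaluation Matrix as a simple (basis, scope) lookup, placing example metrics from the interviews into the quadrant each belongs to. The data structure is our own illustration; the quadrant assignments follow the discussion above.

```python
# Sketch of the TI Evaluation Matrix (Figure 3) as a (basis, scope) lookup,
# populated with example metrics from the interviews. Representation is
# illustrative only; quadrant assignments follow Section 5.1.

matrix = {
    ("activity", "project"): ["number of leads gained", "number of papers/patents reviewed"],
    ("activity", "firm"):    ["geographical coverage", "network gained"],
    ("outcome", "project"):  ["ideas taken up in a project", "value of projects brought in"],
    ("outcome", "firm"):     ["launches with an OI component", "contribution to revenue"],
}

def classify(metric):
    """Return the (basis, scope) quadrant that lists a metric, or None."""
    for quadrant, metrics in matrix.items():
        if metric in metrics:
            return quadrant
    return None

print(classify("network gained"))  # ('activity', 'firm')
```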
5.2 Theoretical and practical implications
The research gap we are addressing is the lack of empirical research into how organisations
actually evaluate TI. Both Enkel et al. and Durst & Ståhle listed “use of explicit performance
measure” as a criterion of a mature Open Innovation (OI) system (Enkel et al., 2011; Durst &
Ståhle, 2013). This should also apply to other routines underpinning OI, such as TI. We believe
that the framework we have presented here contributes significantly to TI theory by collecting
and structuring the fragmented data of TI metrics currently used in practice, and comparing it
with the understanding found in various streams of literature. For practitioners, the advantages
of having a matrix of this sort are to contextualise their performance, to encourage practitioners
to adopt more than one metric, and to draw attention to the limitations of some of the current
evaluation practices.
Many works can be found in the literature that support the structure of the framework derived
here, in particular in activity/process vs outcome metrics. Drawing references from the
organisational culture and innovation literature, Ouchi’s model supported the TI Evaluation
Matrix by distinguishing behaviour (or activity-based) and output (or outcome-based)
measurement (Ouchi, 1979). Like others (Wheaton & Chido, 2007; Neely et al., 2005), we
showed that intelligence, like any other routine, could be evaluated from a process or a product
point-of-view.
However, the strength of the TI Evaluation Matrix is that it adds an extra dimension to the
contextualisation of the assessment (the short-term/project focus vs the long-term/firm focus).
This allows an organisation to plot its measures, and to identify the need to adjust measurement
focus (Kennerley & Neely, 2004), which is particularly important as the metrics have such a
subjective meaning, as demonstrated above. The framework reflects the innovation
management literature, suggesting that evaluating an innovation or technology management
activity needs to be done across the processes of knowledge and innovation strategy – as
indicated by Adams et al. (2006). The TI Evaluation Matrix shows links to the concept of
effectiveness (outcome) and efficiency (activity) of foresight systems (Battistella, 2014).
However, in contrast with Battistella, we showed that the measures listed by industry fall into
a more complex set of categories. The evidence we collected shows that the evaluation of TI is
subject to the purpose and the role that TI serves within the organisation. Some interviewees
had more strategic intents, whilst others, in particular the OI managers, focused primarily on
identifying leads to fit with current innovation pipelines, showing that OI is often carried out
for exploitative reasons (March, 1991). This makes the timeframe for the evaluation different
(long-term impact and evaluation for the former, and short-term, project-focused evaluation for the latter).
TI metrics need to be relevant to the stakeholders directly involved in the TI process, such as
technology scouts and their respective intelligence consumers, whereas holistic innovation
measures are more of interest to top level management and shareholders (Dewangan & Godse,
2014). As expected, TI practitioners who are mainly focused on technology scouting (C3, C5
and C6) use activity metrics (e.g. “number of leads” and “number of papers reviewed”), while
those involved in strategic decision-making (C1, C2, C9) are mainly concerned with the
outcome of the TI process, as well as the longer-term firm-specific measures (e.g. contribution
to revenue). C3 and C5 could be using project-specific activity metrics more frequently, because
they operate in a hierarchical culture with clear reporting structures (Nobel & Birkinshaw,
1998). The project-specific activity metrics would therefore be more relevant to them in order
to quantify the work and to report their progress to managers.
6. Conclusions
This work has developed a framework for the evaluation of Technology Intelligence (TI) based
on empirical data. The TI Evaluation Matrix, proposed here, can particularly help practitioners
structure their measuring and evaluating strategy in TI. As most interviewees expressed the
need for an organised procedure to assess their TI function, this framework serves as a tool for
them to map out the metrics used, and to understand the challenges and implications within each
category of metrics.
Consistent with past research on metrics, the vertical axis of the TI Evaluation Matrix
distinguishes between activity and impact measures. The horizontal axis of the matrix emerges
from the case study interviews, and the focus group study, indicating that a separation between
project (short-term) and non-project (long-term) assessment is required. Different limitations
for measuring TI exist within each category. The analysis shows that, with the exception of TI
managers concerned with explicit sources of information (such as patents or journals), in
practice TI is seldom measured at the level of the TI message. This contrasts with what is
advocated by the information quality literature, and by some TI researchers (e.g. Kerr et al.,
2006).
The four categories of metrics are neither mutually exclusive nor independent of each other.
It could be argued that a higher level of activity within the TI cycle can contribute to a better
outcome and more value to the decision-maker. On the other hand, we should be aware that
quality of method does not ensure success, which is true in both foresight and TI (Georghiou &
Keenan, 2006; Wheaton & Chido, 2007). The demonstration of the long-term benefits of having
a TI system seems to be left to researchers and industry is currently weaker in this evaluation.
Although steps were taken to ensure that this research is reliable and accurate, there are
limitations to this study. We are aware of the risks in generalising a limited number of case
studies. Furthermore, since data collection was carried out in the UK, our results could be biased
towards a Western perspective of organisational culture, TI and performance measures (Lok &
Crawford, 2004).
The case studies were based on relatively large global organisations, with employee numbers
ranging from 800 to 274,000. Therefore, it is not surprising that the interviewees felt a strong
need for TI performance measures, because large organisations often seek to standardise
and quantify their processes. This situation might be different for SMEs.
Future research should aim to:
• Strengthen the model by gathering input from both the decision-maker and the
intelligence provider. The model could be studied in the context of organisational
culture, for instance appreciative or regulative culture (Vickers, 1963).
• Deepen the investigation into how information quality metrics could be better
understood and used for TI evaluation. This could improve the understanding of where
TI systems work well, independently of the rest of the firms’ absorptive capacity.
• Investigate whether there is any relationship between the level of maturity of TI in a
firm and how their TI evaluation maps out in the framework.
• Adopt an action research methodology, by applying the framework in practice, and
learning from successive rounds of its application (Coughlan & Coghlan, 2002). We
believe this methodology is suitable to refine the current framework because it “brings
together action and reflection, theory and practice, in participation with others, in the
pursuit of practical solutions to issues of […] concern to people” (Reason & Bradbury,
2001, p.1).
Acknowledgements
The authors would like to thank all the companies and individuals who contributed to this
research. Special thanks should go to the Strategic Technology Innovation Management (STIM)
Consortium for providing a platform to conduct this research. This work was done as part of a
Master’s dissertation for the MPhil in Industrial Systems, Manufacture and Management (ISMM) at
the University of Cambridge, and we are very grateful to those who offered help throughout the
duration of the course.
Appendix
[Insert Table 3 about here]
References
Adams, R., Bessant, J. and Phelps, R. (2006) 'Innovation management measurement: a review', International Journal of Management Reviews, Vol. 8, No. 1, pp.21–47.
Battistella, C. (2014) 'The organisation of corporate foresight: a multiple case study in the telecommunication industry', Technological Forecasting and Social Change, Vol. 87, pp.60–79.
Berg, B.L. (2001) Qualitative Research Methods for the Social Sciences, 4th ed., Allyn & Bacon, Boston, MA.
Bourne, M. and Neely, A. (2003) 'Implementing performance measurement systems: a literature review', International Journal of Business Performance Management, Vol. 5, No. 1, pp.1–24.
Büschgens, T., Bausch, A. and Balkin, D.B. (2013) 'Organizational culture and innovation: a meta-analytic review', Journal of Product Innovation Management, Vol. 30, No. 4, pp.763–781.
Chenhall, R.H. and Langfield-Smith, K. (2007) 'Multiple perspectives of performance measures', European Management Journal, Vol. 25, No. 4, pp.266–282.
Chesbrough, H. (2004) 'Managing open innovation', Research-Technology Management, Vol. 47, No. 1, pp.23–26.
Chesbrough, H. (2006) Open Business Models: How to Thrive in the New Innovation Landscape, Harvard Business School Press, Boston, MA.
Chiesa, V., Coughlan, P. and Voss, C.A. (1996) 'Development of a technical innovation audit', Journal of Product Innovation Management, Vol. 13, No. 2, pp.105–136.
Chiu, L.F. (2003) 'Transformational potential of focus group practice in participatory action research', Action Research, Vol. 1, No. 2, pp.165–183.
Coughlan, P. and Coghlan, D. (2002) 'Action research for operations management', International Journal of Operations & Production Management, Vol. 22, No. 2, pp.220–240.
Coyne, I.T. (1997) 'Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries?', Journal of Advanced Nursing, Vol. 26, No. 3, pp.623–630.
Davison, L. (2001) 'Measuring competitive intelligence effectiveness: insights from the advertising industry', Competitive Intelligence Review, Vol. 12, No. 4, pp.25–38.
Dewangan, V. and Godse, M. (2014) 'Towards a holistic enterprise innovation performance measurement system', Technovation, Vol. 34, No. 9, pp.536–545.
Dinter, B. (2013) 'Success factors for information logistics strategy – an empirical investigation', Decision Support Systems, Vol. 54, No. 3, pp.1207–1218.
Durst, S. and Ståhle, P. (2013) 'Success factors of open innovation – a literature review', International Journal of Business Research and Management (IJBRM), Vol. 4, No. 4, pp.111–131.
Eisenhardt, K.M. (1989) 'Building theories from case study research', Academy of Management Review, Vol. 14, No. 4, pp.532–550.
Eisenhardt, K.M. and Graebner, M.E. (2007) 'Theory building from cases: opportunities and challenges', Academy of Management Journal, Vol. 50, No. 1, pp.25–32.
Enkel, E., Bell, J. and Hogenkamp, H. (2011) 'Open innovation maturity framework', International Journal of Innovation Management, Vol. 15, pp.1161–1189.
Flapper, S.D.P., Fortuin, L. and Stoop, P.P.M. (1996) 'Towards consistent performance management systems', International Journal of Operations & Production Management, Vol. 16, No. 7, pp.27–37.
Georghiou, L. and Keenan, M. (2006) 'Evaluation of national foresight activities: assessing rationale, process and impact', Technological Forecasting and Social Change, Vol. 73, No. 7, pp.761–777.
Glaser, B.G. (1978) Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, Sociology Press, Mill Valley, CA.
Grieves, J. (2010) Organizational Change: Themes and Issues, Oxford University Press, New York, NY.
Heidari, F. and Loucopoulos, P. (2014) 'Quality evaluation framework (QEF): modeling and evaluating quality of business processes', International Journal of Accounting Information Systems, Vol. 15, pp.193–223.
Huizingh, E.K.R.E. (2011) 'Open innovation: state of the art and future perspectives', Technovation, Vol. 31, No. 1, pp.2–9.
Jeppesen, L.B. and Lakhani, K.R. (2010) 'Marginality and problem-solving effectiveness in broadcast search', Organization Science, Vol. 21, No. 5, pp.1016–1033.
Johnson, R.B. and Onwuegbuzie, A.J. (2004) 'Mixed methods research: a research paradigm whose time has come', Educational Researcher, Vol. 33, No. 7, pp.14–26.
Kaplan, S. and Tripsas, M. (2008) 'Thinking about technology: applying a cognitive lens to technical change', Research Policy, Vol. 37, No. 5, pp.790–805.
Keegan, D.P., Eiler, R.G. and Jones, C.R. (1989) 'Are your performance measures obsolete?', Management Accounting, Vol. 70, No. 12, pp.45–50.
Kennerley, M. and Neely, A. (2004) 'Performance measurement frameworks: a review', in Neely, A. (Ed.): Business Performance Measurement, Cambridge University Press, Cambridge, pp.145–155.
Kerr, C.I.V., Mortara, L., Phaal, R. and Probert, D.R. (2006) 'A conceptual model for technology intelligence', International Journal of Technology Intelligence and Planning, Vol. 2, No. 1, p.73.
Lee, S. and Mortara, L. (2012) 'Analysis of document-mining techniques and tools for technology intelligence: discovering knowledge from technical documents', International Journal of Technology Management, Vol. 60, Nos. 1–2, pp.130–156.
Lee, Y.W., Strong, D.M., Kahn, B.K. and Wang, R.Y. (2002) 'AIMQ: a methodology for information quality assessment', Information & Management, Vol. 40, No. 2, pp.133–146.
Lichtenthaler, E. (2004a) 'Technological change and the technology intelligence process: a case study', Journal of Engineering and Technology Management, Vol. 21, No. 4, pp.331–348.
Lichtenthaler, E. (2004b) 'Technology intelligence processes in leading European and North American multinationals', R&D Management, Vol. 34, No. 2, pp.121–135.
Lok, P. and Crawford, J. (2004) 'The effect of organisational culture and leadership style on job satisfaction and organisational commitment: a cross-national comparison', Journal of Management Development, Vol. 23, No. 4, pp.321–338.
Lönnqvist, A. and Pirttimäki, V. (2006) 'The measurement of business intelligence', Information Systems Management, Vol. 23, No. 1, pp.32–40.
March, J.G. (1991) 'Exploration and exploitation in organizational learning', Organization Science, Vol. 2, No. 1, pp.71–87.
Melnyk, S.A., Bititci, U., Platts, K., Tobias, J. and Andersen, B. (2014) 'Is performance measurement and management fit for the future?', Management Accounting Research, Vol. 25, No. 2, pp.173–186.
Melnyk, S.A., Stewart, D.M. and Swink, M. (2004) 'Metrics and performance measurement in operations management: dealing with the metrics maze', Journal of Operations Management, Vol. 22, No. 3, pp.209–218.
Miles, M.B. and Huberman, A.M. (1994) Qualitative Data Analysis: An Expanded Sourcebook, Sage Publications, Thousand Oaks, CA.
Mortara, L. (2015) Communicating Intelligence, Institute for Manufacturing, University of Cambridge, Cambridge.
Mortara, L., Kerr, C.I.V., Phaal, R. and Probert, D.R. (2009) 'Technology intelligence practice in UK technology-based companies', International Journal of Technology Management, Vol. 48, No. 1, p.115.
Mortara, L., Thomson, R., Moore, C., Armara, K., Kerr, C.I.V., Phaal, R. and Probert, D.R. (2010) 'Developing a technology intelligence strategy at Kodak European Research: scan & target', Research-Technology Management, Vol. 53, No. 4, pp.27–38.
Mortara, L., Ford, S.J. and Jaeger, M. (2013) 'Idea competitions under scrutiny: acquisition, intelligence or public relations mechanism?', Technological Forecasting and Social Change, Vol. 80, No. 8, pp.1563–1578.
Naumann, F. and Rolker, C. (n.d.) Assessment Methods for Information Quality Criteria [online], available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.38.6523&rep=rep1&type=pdf (accessed 22 June 2014).
Neely, A. (2002) Business Performance Measurement: Theory and Practice, Cambridge University Press, Cambridge.
Neely, A. (2006) Measuring Business Performance – Why, What and How, The Economist and Profile Books Limited, London.
Neely, A., Richards, H., Mills, J., Platts, K. and Bourne, M. (1997) 'Designing performance measures: a structured approach', International Journal of Operations & Production Management, Vol. 17, No. 11, pp.1131–1152.
Neely, A., Mills, J., Platts, K. and Kennerley, M. (2000) 'Performance measurement system design: developing and testing a process-based approach', International Journal of Operations & Production Management, Vol. 20, No. 10, pp.1119–1145.
Neely, A., Gregory, M. and Platts, K. (2005) 'Performance measurement system design: a literature review and research agenda', International Journal of Operations & Production Management, Vol. 25, No. 12, pp.1228–1263.
Negash, S. and Gray, P. (2008) 'Business intelligence', in International Handbooks on Information Systems, Springer, Berlin, Heidelberg, pp.175–193.
Nobel, R. and Birkinshaw, J. (1998) 'Innovation in multinational corporations: control and communication patterns in international R&D operations', Strategic Management Journal, Vol. 19, No. 5, pp.479–496.
O'Brien, J.P. (2003) 'The capital structure implications of pursuing a strategy of innovation', Strategic Management Journal, Vol. 24, No. 5, pp.415–431.
Ouchi, W.G. (1979) 'A conceptual framework for the design of organisational control mechanisms', Management Science, Vol. 25, No. 9, pp.833–848.
Patton, M.Q. (1980) Qualitative Evaluation Methods, Sage Publications, Newbury Park, CA.
Prescott, J. (1999) 'The evolution of competitive intelligence', Proposal Management, Vol. 6, pp.71–90.
Reason, P. and Bradbury, H. (2001) Handbook of Action Research: Participative Inquiry and Practice, Sage Publications, London.
Reger, G. (2001) 'Technology foresight in companies: from an indicator to a network and process perspective', Technology Analysis & Strategic Management, Vol. 13, No. 4, pp.533–553.
Rogo, F., Cricelli, L. and Grimaldi, M. (2014) 'Assessing the performance of open innovation practices: a case study of a community of innovation', Technology in Society, Vol. 38, pp.60–80.
Rohrbeck, R. (2007) 'Technology scouting – a case study on the Deutsche Telekom Laboratories', ISPIM-Asia Conference, 9–12 January 2007, New Delhi, India, pp.1–14.
Rohrbeck, R. (2010) 'Harnessing a network of experts for competitive advantage: technology scouting in the ICT industry', R&D Management, Vol. 40, No. 2, pp.169–180.
Salter, A. and Ter Wal, A.L.J. (2014) 'Open for ideation: individual-level openness and idea generation in R&D', Journal of Product Innovation Management, Vol. 32, No. 4, pp.1–17.
Simons, R. (1990) 'The role of management control systems in creating competitive advantage: new perspectives', Accounting, Organizations and Society, Vol. 15, Nos. 1–2, pp.127–143.
Stewart, D.W. and Shamdasani, P.N. (1990) Focus Groups: Theory and Practice, Sage Publications, Newbury Park, CA.
Stvilia, B., Gasser, L., Twidale, M. and Smith, L. (2007) 'A framework for information quality assessment', Journal of the American Society for Information Science and Technology, Vol. 58, No. 12, pp.1720–1733.
Tracy, S.J. (2013) Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact, Wiley-Blackwell, Chichester.
Van de Ven, A.H. (2007) Engaged Scholarship: A Guide for Organizational and Social Research, Oxford University Press, Oxford.
Vickers, G. (1963) 'Appreciative behaviour', Acta Psychologica, Vol. 21, pp.274–293.
Wang, R.Y. (1998) 'A product perspective on total data quality management', Communications of the ACM, Vol. 41, No. 2, pp.58–65.
West, C. (2001) Competitive Intelligence, Palgrave Macmillan, Basingstoke.
Wheaton, K.J. and Chido, D. (2007) 'Evaluating intelligence', Competitive Intelligence Magazine, Vol. 10, No. 5, pp.19–23.
Woodall, P., Borek, A. and Parlikad, A.K. (2013) 'Data quality assessment: the hybrid approach', Information & Management, Vol. 50, No. 7, pp.369–382.
Wynen, J., Verhoest, K., van Thiel, S. and Ongaro, E. (2014) 'Innovation-oriented culture in the public sector: do managerial autonomy and result control lead to innovation?', Public Management Review, Vol. 16, No. 1, pp.45–66.
Yin, R.K. (2006) 'Mixed methods research: are the methods genuinely integrated or merely parallel?', Research in the Schools, Vol. 13, No. 1, pp.41–47.
Figures and Tables:
Figure 1: Different points of evaluation of technology intelligence
Figure 2: Information quality metrics results (bar chart showing the frequency, from 0 to 6, with which each IQ dimension was selected: credibility, relevance, understandability, accessibility, interpretability, objectivity, reliability, accuracy, uniqueness, informativeness, clarity, importance, verifiability, timeliness, granularity, completeness, volatility and semantic integrity; x-axis: frequency of the dimensions being selected)
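The frequencies in Figure 2 are simple tallies of how often respondents selected each information quality dimension. The following minimal Python sketch shows one way such a tally could be computed; the respondents and their selections below are invented placeholders, not the study's data.

```python
# Minimal sketch (illustrative only): tallying how often each information
# quality (IQ) dimension is selected across interview/focus-group responses.
# The selections below are invented placeholders, not the study's data.
from collections import Counter

responses = {
    "case_1": ["credibility", "accessibility", "accuracy", "clarity"],
    "case_3": ["credibility", "verifiability"],
    "case_7": ["relevance", "objectivity", "verifiability"],
    "focus_group": ["credibility", "relevance", "understandability"],
}

# Count one vote per respondent for each dimension they selected.
frequency = Counter(dim for selected in responses.values() for dim in selected)

# List dimensions from most to least frequently selected, as in Figure 2.
for dimension, count in frequency.most_common():
    print(f"{dimension}: {count}")
```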
Figure 3: TI evaluation matrix
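The evaluation matrix combines the paper's two classification logics: whether a metric is activity- or outcome-based, and whether it applies to a specific project or to the entire firm. As a minimal sketch of how a practitioner might audit their own metric portfolio against these two axes, the Python fragment below tallies metrics by quadrant; the example metrics are drawn from Table 3, but their quadrant assignments here are illustrative assumptions, not the paper's classification.

```python
# Illustrative sketch of the two-axis metric classification suggested by the
# TI evaluation matrix: each metric is activity- or outcome-based, and
# project-specific or firm-wide. Quadrant assignments below are assumptions
# for illustration, not the paper's own classification.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class TIMetric:
    name: str
    basis: str   # "activity" or "outcome"
    scope: str   # "project-specific" or "firm-wide"

portfolio = [
    TIMetric("Number of papers reviewed", "activity", "project-specific"),
    TIMetric("Technical specification target met", "outcome", "project-specific"),
    TIMetric("Number of leads", "activity", "firm-wide"),
    TIMetric("Impact on turnover", "outcome", "firm-wide"),
]

# Tally the portfolio by quadrant; an empty quadrant (e.g., no outcome-based,
# firm-wide measure) would suggest overall impact is not being tracked.
by_quadrant = Counter((m.basis, m.scope) for m in portfolio)
for (basis, scope), n in sorted(by_quadrant.items()):
    print(f"{basis} / {scope}: {n} metric(s)")
```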
Case | Sector/company | Total employees | Interviewee's role | Department
1 | Fast-moving consumer goods | 174,000 | Open innovation manager | R&D
2 | Oil and energy | 83,900 | Strategy advisor | Group technology
3 | Home appliances manufacturer | 23,400 | Team leader – structural design & prototype | Structural design department
4 | Technology company 1 | 8,800 | Head of innovation (previous) | European research group
5 | Pharmaceutical | 99,000 | Seeker | Advanced manufacturing technology R&D
6 | Defence department | NA | Staff officer | Planning branch
7 | Research institute | 800 | European funding coordinator | Joining technologies department
8 | Technology company 2 | 4,000 | External research programme manager | Research, design & development
9 | Fast-moving consumer goods | 274,000 | Technology innovation manager | R&D
10 | Technology company 2 | 800 | Software system manager | Library
11 | Trade and investment agency | 2,300 | Technology specialist | Information services
12 | Aerospace company | 54,100 | Assembly commodity strategy lead | Capability acquisition
Table 1: Description of case study interviewees and their role
IQ dimension | Case | Comment
Credibility | 1 | "Certain academic circles will lower the credibility of information you receive because they are not coming from people with a scientific background."
Credibility | 3 | "Credibility is important in the beginning, to make people listen to you."
Credibility | 11 | "We look for track records."
Credibility | FG | "Credibility of the TI messenger comes from his/her experience in the market or technical area."
Relevance | 7 | "It is [about] how relevant it is to the project, the goals of the project or application or decision, and is usually important but not well done at this point in time."
Understandability | 2 | "[I provide] packaged information to executives. They don't have much time, so information needs to be simple and easy to understand."
Accessibility | 1 | "Accessibility is important from the point of view of IP rights, licences or purchases, [whether or not we have access to the information and the technology]."
Objectivity | 7 | "Objectivity [is especially important for] information that is usually questioned and examined."
Accuracy | 1 | "What you would always look for in a lead, first of all in terms of intrinsic qualities is accuracy, accurate in the sense that it does fit and represents the answer we are looking for."
Uniqueness | 5 | "So if we are after a really disruptive technology opportunity, then uniqueness is going to be of relevance, because that uniqueness is going to give us maybe something others don't do or didn't guess."
Uniqueness | 8 | "If 'unique' refers to a technology then that's not important. But if 'uniqueness' refers to a way of applying it, then maybe that's a bit more important."
Clarity | 1 | "I don't know how different clarity is from understandability."
Verifiability | 7 | "Verifiability is very hard. [With TI] you are talking about future research projects that are identifying future needs or trying to solve future problems using different paradigms."
Verifiability | 3 | "If you tell the senior management something and he asks 'what' or 'why', we should have proofs for that, to make them trust you."
Granularity | 2 | "When dealing with strategy, we have to accept that it will not have the granularity it will have in operations. We are not dealing with the decimal points, or else it will not be called strategy."
Table 2: Comments from interviewees about IQ metrics (FG: focus group)
Company | Interviewee's role | Role/characteristics | Modes of TI | Measures used
1 | Open innovation manager | Works across functions to support different business functions; in charge of technology scouting; facilitates technology acquisition and creates potential for disruptive innovation | Target, scan | Number of leads; number of leads incorporated into the business; rate of lead impact; launches with an OI component; impact on turnover
2 | Strategy advisor | Long-term strategy planning; scans for game-changing technology; technology foresighting and landscaping from an industry point of view | Scan | Financial benefits of the projects
3 | Team leader – structural design & prototype | Technology research, roadmapping and acquisition | Target | Technical specification target met; number of papers reviewed; networks gained; number of ideas generated; number of patents reviewed
4 | Head of innovation (previous) | Technology scouting, scanning and landscaping with a specific geographical focus | Target, scan | Geographical coverage; number of leads assessed; number of leads that turned into collaborations
5 | Seeker | Technology scouting for science and technology platforms; cross-industry learning and internal intelligence gathering | Scan, trawl, mine | Number of papers and journals read; networks gained; number of patents reviewed; level of technology absorption; number of leads; level of impact (penetration within the organisation); geographical spread
6 | Staff officer | Acquires technology with current partners based on internal needs | Target | Number of internal papers published; breadth of external networks consulted; geographical spread
7 | European funding coordinator | Works across sections to identify technology development activities; prepares proposals for funding; innovation management | Target, mine | Number of leads
8 | External research programme manager | Coordinates university research; controls budgets and contracts; projects focus on fundamental research | Scan | No active monitoring (progress followed via review meetings and updates)
9 | Technology innovation manager | Leads breakthrough technology research in process equipment and develops new process concepts at the front end of technological innovation | Target, trawl, mine | Percentage of turnover resulting from R&D innovation; number of top innovations in a country that came from the company
10 | Software system manager | Acts as an innomediary; monitors information from publications and developments in industry; enters new scholarly materials into the company database | Target, trawl, mine | Number of database entries; time from an article's publication to its entry in the database; impact on project funding; number of journals reviewed
11 | Technology specialist | Connects to technology startups to be brought in based on their value; develops technology trends and awareness | Scan | Number of leads; number of leads that turned into collaborations; value of projects brought in; success rate of incorporating projects; geographical coverage
12 | Assembly commodity strategy lead | Generates technology ideas and capabilities with research centres, universities and suppliers; also in charge of commodity strategy | Target, trawl, mine | Success rate of incorporating projects; technical specification target met; level of technology absorption
Table 3: Overview of the key metrics reported by the interviewees (each measure is further classified as activity- or outcome-based, and as project-specific or firm-wide)