Figure 2. Testing a computer prototype.

Source publication
Article
Our general strategy of designing, developing, and maintaining over 6,000 webpages on the Indiana University School of Education website was to:
• Use an inquiry-based approach to design — user needs assessment, rapid prototyping, and usability testing.
• Keep content in XML format, separate from its appearance on the web.
• Have web designers at the...
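The second bullet, keeping content in XML separate from its appearance on the web, is the core technical idea behind the EdWeb tooling mentioned in the excerpts below. That tooling is not shown here, so the following is only a minimal sketch of content/presentation separation; the element names, template, and render function are hypothetical illustrations, not code from the article.

```python
# Minimal sketch: content lives in XML; appearance is applied at render time.
# Element names and the template below are hypothetical illustrations only.
import xml.etree.ElementTree as ET

CONTENT_XML = """
<faculty_member>
  <name>Jane Doe</name>
  <office>Education 2100</office>
  <email>jdoe@example.edu</email>
</faculty_member>
"""

# Presentation is kept in a separate template, so the XML content
# can be re-skinned without editing the underlying data.
PAGE_TEMPLATE = """<html>
  <body>
    <h1>{name}</h1>
    <p>Office: {office}</p>
    <p>Email: <a href="mailto:{email}">{email}</a></p>
  </body>
</html>"""

def render(xml_text: str) -> str:
    record = ET.fromstring(xml_text)
    return PAGE_TEMPLATE.format(
        name=record.findtext("name"),
        office=record.findtext("office"),
        email=record.findtext("email"),
    )

if __name__ == "__main__":
    print(render(CONTENT_XML))
```

In practice, the same XML record could be re-rendered with a different template (or through an XSLT stylesheet) whenever the site's appearance changes, without editing the content itself.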

Contexts in source publication

Context 1
... architecture we would have approximately 6,000 hyperlinks on the home page, making it highly impractical to use. With the card sort approach, the number of levels in the hierarchy is kept to a minimum: breadth is greater than depth (cf., Shneiderman & Plaisant, 2004). This means that links to home pages for all major units within the School of Education are just one click away from the home page — and that most web pages are not more than two or three clicks down the hierarchy. For example, if we were to put 15 links on the home page, 20 links on each second-level page, and 20 links on each third-level page, the hierarchical table of contents could point to 6,000 unique web pages. Of course, the card sort and resulting information architecture are based on commonality of user needs, not some predetermined number of links per page. But the example illustrates that if top-level navigation pages are well-organized, a fairly large number of web pages down the hierarchy can be accessed with just a few clicks.

An information architecture that results from research can be helpful when justifying the site design to administrators and faculty. The information architecture is not a matter of personal opinion or preference of any one particular person, but is based on empirical data gained through disciplined inquiry. Moreover, administrators and faculty are stakeholders too, and their needs and goals are represented in the information architecture, as well as those of other target audiences such as students, alumni and K-12 professionals.

Next, the design team creates a set of paper pages, called a rapid paper prototype, containing a sample of the content and structure that is being proposed for the site (cf., Snyder, 2003). The content structure is based on the information architecture we derived from the needs assessment and the card sort. The labels on the cards become the hyperlink names, and the cards in each stack become the basis for naming hyperlinks the next level down, and the next and so on, as described above. The paper prototype is typically put into a 3-ring notebook. We write numbers or letters next to the hyperlinks (underlined text), and then we create tabbed pages with corresponding numbers or letters on the tabs, so that we can simulate web browsing.

We conduct usability tests of the paper prototype by selecting members of the target audience. Usually we need to select only 4-6 members of each appropriate group (cf., Dumas & Redish, 1999; Krug, 2000; Nielsen & Landauer, 1993; Nielsen, 2000). We then observe how these people use the paper prototype to answer frequently asked questions that were identified in the needs assessment. We ask them to think aloud, record the paths they take, determine whether they find the information and find out where they would look for it in the prototype. (A very practical reference on basic principles of web design and usability testing is Krug (2000), Don’t Make Me Think!) Occasionally we alter the prototype — sometimes on the spot — and continue to test it until we have identified major problems with the design of the information architecture. If the problems are severe, we attempt to redesign the paper prototype and conduct another round of usability tests. Otherwise, we fix the problems and incorporate the design solutions in our computer prototype, which is the next phase. We do rapid computer prototyping next and conduct further usability tests.
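The breadth-over-depth example in the excerpt above is straightforward multiplication: 15 links on the home page, 20 on each second-level page, and 20 on each third-level page give 15 × 20 × 20 = 6,000 pages reachable within three clicks. A small sketch of that arithmetic is below; the link counts are simply the excerpt's illustrative figures, not a recommended design.

```python
# Sketch: how many unique pages a shallow, broad hierarchy can reach.
# Link counts per level are the illustrative numbers from the excerpt.
from math import prod

def reachable_pages(links_per_level):
    """Pages reachable at the deepest level of a strict hierarchy."""
    return prod(links_per_level)

if __name__ == "__main__":
    levels = [15, 20, 20]  # home page, second-level pages, third-level pages
    print(reachable_pages(levels))  # -> 6000, each within three clicks of home
```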
In composing the computer prototype we attend to some web elements (how users will navigate the site, approximate banner graphics, page layouts, occasional images, etc.). At this point, we are not trying to make the design look completely finished, but to get enough of it working on the web so that we can try it with users. In the past five years we have made this rapid prototyping process fairly easy for ourselves by creating approximate design templates and by using our EdWeb tools to build or re-build an existing website to create the prototype of a new or revised one. In our experience, participants in the usability evaluation are more likely to notice problems and comment on them if the website looks like a prototype, not a finished product. Indeed, some intentional typographical errors, occasional missing items and crude graphics in the design prototype can help encourage users to make comments during formative evaluation (cf., Thiagarajan, Semmel & Semmel, 1974).

We also seek feedback from other key stakeholders on the computer prototype at this time. Administrators and faculty tend to be more candid about design issues when they can literally see that the website is still a prototype. After modifying the initial computer prototype based on feedback from administrators and faculty, we select a new group of four to six users who are representative of each relevant target audience (cf., Krug, 2000; Nielsen, 2000), and conduct usability tests as described above for paper prototype testing (see Figure 2). At this time users try to find information under more authentic conditions than with a paper prototype. We specifically choose representative conditions, so that users are observed with PCs and Macintosh computers, typical web browsers and both broadband and dial-up connections. Users are asked to think aloud, and we record browsing paths and also use of the search engine. We use these data to identify further problems with the design, including user navigation difficulties and frustration with any web pages that take too long to display. If the problems are still severe or numerous, we will conduct new usability testing with new users after making changes in the computer prototype.

When we are satisfied that we have fixed the big problems with the design, based on our usability findings, we move on to the final production of the site. At this point we need to pay attention to numerous details for web publishing: getting final versions of graphics produced so that they look good and load quickly, creating and debugging cascading style sheets (CSS), making sure HTML or XHTML is valid, making each web page look good in terms of layout, checking the use of white space and the inclusion of graphics, and so on. We also need to test our hyperlinks to assure correct linkage. Normally we do these tasks in a web server folder that is hidden from the public web (i.e., nothing links to it) so that the public is unaware of the new site, but we can view it during bug testing.

As the production test site nears completion, we ask key School of Education stakeholders (such as our Dean, department chairs, faculty and staff) to preview it. Based on their feedback and comments we make further cosmetic changes. We do not make major substantive changes at this time, since the information architecture and overall design have already been modified based on data from usability evaluations.
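Hyperlink testing of the hidden production folder, as described above, can be partly automated. The sketch below crawls a staging folder and reports pages that fail to load; the staging URL is a hypothetical placeholder, and the crawler is a generic standard-library example rather than the authors' actual tooling.

```python
# Sketch of a simple internal link checker for a hidden staging folder.
# The STAGING_ROOT URL is hypothetical; it is not from the source article.
import urllib.request
import urllib.error
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag

STAGING_ROOT = "https://education.example.edu/_staging/"

class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def check_site(root):
    to_visit, seen, broken = [root], set(), []
    while to_visit:
        page = to_visit.pop()
        if page in seen:
            continue
        seen.add(page)
        try:
            html = fetch(page)
        except (urllib.error.URLError, OSError) as err:
            broken.append((page, str(err)))
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            target, _ = urldefrag(urljoin(page, href))
            # Only crawl further within the staging folder itself.
            if target.startswith(root) and target not in seen:
                to_visit.append(target)
    return broken

if __name__ == "__main__":
    for url, error in check_site(STAGING_ROOT):
        print(f"BROKEN: {url} ({error})")
```

Links pointing outside the staging folder are skipped in this sketch; a fuller checker would also verify external targets before the site goes live.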
As mentioned above, the reviewers at this stage know that this design process is inquiry-based, and that empirical data are used to make design decisions. In fact, during the final production of the 2004 redesign, our Dean and several faculty members questioned the redundancy of audience hyperlinks on our home page. These links occur in both the horizontal navigation bar below the page banner, and again vertically in the left-most column of hyperlinks in the page body. The Dean suggested that we should consider removing those audience links in the left-most column, since it would make the appearance of the home page “cleaner” and less “busy” (and less redundant).

Our response to the Dean was based on usability results. We reported the facts: none of our users during the usability tests actually used the links in the navigation bar across the top of the home page, and these links were seldom used on the second-level pages. Instead, users frequently selected the audience links on the home page in the left-hand column during our usability evaluations. Moreover, during think-aloud, it was very clear why: the elaborators immediately following each hyperlink were frequently used to decide which category to choose. These were the empirical findings during usability tests.

Why, then, did we not remove the links in the navigation bar across the top of the screen? University administrators were encouraging standardization of audience links at the top of web pages as a consistent navigation device throughout IU, and the campus web manager, speaking on their behalf, had strongly encouraged us to remain consistent with this design goal. The compromise was to leave the links in both places on the School of Education home page, even though they are redundant. If we had not had empirical data from usability testing, the links in the left column would likely have been removed, and the usability of our home page would have suffered as a consequence.

Finally, when we are ready to go “live” and the Dean’s Office has approved the final design, we publish the website for the world to see (using our content management tools described below). If the website is completely new, then we need to add hyperlinks on other existing web pages on the School’s site and also notify other webmasters of the new site. If the site is a revision, usually little external change is required, since we make every attempt to keep file names the same after a revision so that we do not break external hyperlinks to the site or parts of it.

We also ask one of our university webmasters to make the university search engine index (or re-index) the new site immediately. After that, each page is indexed by the “spider” that follows millions of web links within the 750,000 web pages at our institution, which ordinarily occurs about once a month, and about every two weeks for pages that are updated frequently. Other search engine “spiders” (e.g., Googlebots) will also soon find and index our new site by following hyperlinks as they normally do. Then we ...
Context 2
... modifying the initial computer prototype based on feedback from administrators and faculty, we select a new group of four to six users who are representative of each relevant target audience (cf., Krug, 2000; Nielsen, 2000), and conduct usability tests as described above for paper prototype testing (see Figure 2). ...

Similar publications

Article
The Alpha Magnetic Spectrometer (AMS-02) is a particle physics experiment installed on board the International Space Station (ISS). It has been operating since May 2011 and is expected to continue through 2028 or beyond. The AMS collaboration seeks to store, manage and present its research results as well as details about the detector and operation...

Citations

... Further, the approach may not be well understood by the broader development team (including stakeholders and developers), has different project management requirements than more traditional instructional design approaches, and can place an undue burden on subject matter experts. Indeed, Frick et al. (2004) suggest that RP alone may be insufficient for designing products that work well for users, pointing to a need for additional formal or traditional methods of instructional design. ...
Article
The need to prepare students with twenty-first-century skills through STEM-related teaching is strong, especially at the elementary level. However, most teacher education preparation programs do not focus on STEM education. In an attempt to provide an exemplary model of a STEM unit, we used a rapid prototyping approach to transform an inquiry-based unit on moon phases into one that integrated technology in a meaningful manner to develop technological literacy and scientific concepts for pre-service teachers (PSTs). Using qualitative case study methodology, we describe lessons learned related to the development and implementation of a STEM unit in an undergraduate elementary methods course, focusing on the impact the inquiry model had on PSTs’ perceptions of inquiry-based science instruction and how the integration of technology impacted their learning experience. Using field notes and survey data, we uncovered three overarching themes. First, we found that PSTs held absolutist beliefs and had a need for instruction on inquiry-based learning and teaching. Second, we determined that explicit examples of effective and ineffective technology use are needed to help PSTs develop an understanding of meaningful technology integration. Finally, the rapid prototyping approach resulted in a successful modification of the unit, but caused the usability of our digital instructional materials to suffer. Our findings suggest that while inquiry-based STEM units can be implemented in existing programs, creating and testing these prototypes requires significant effort to meet PSTs’ learning needs, and that iterating designs is essential to successful implementation.
... While CSS design can be beautiful, coding it can be ugly. Authoring CSS is sometimes referred to as "programming" [2] and the necessity of CSS debugging is well documented ([3], [4], [5]). It seems as if the many advantages of CSS have led the community to accept the fact that presentation authors often spend more time on coding decisions than on graphic design. ...
Conference Paper
Authoring CSS is a complex, time-consuming task requiring not only skilled human graphic designers but also skilled human coders. Practice shows that today human-authored code is still superior to machine-generated CSS, but the code characteristics which make the difference have not been researched or even quantified yet. In this paper we introduce the abstractness factor, a quality metric which reveals the advantages of human-authored code and can serve as an optimization criterion and benchmark for automated CSS coding. We argue that a high abstractness factor represents a high maintainability and reusability of the presentation document as well as the content document. Through an evaluation of 100,000 HTML pages randomly gathered from the Web, we show that today's typical style sheet document has a significantly higher abstractness factor compared to code fully machine-generated by state-of-the-art applications.
... Under UCDD we place multiple process approaches. These include participatory design (PD) (Bodker et al., 1988), rapid prototyping (RP) (Goodrum et al., 1993; Frick et al., 2005), user-friendly design (Corry et al., 1997; Dumas and Redish, 1993; Norman, 1988; Sugar and Boling, 1995), pluralistic walkthrough (Bias, 1994), contextual design (Beyer and Holtzblatt, 1998; Tessmer and Wedman, 1995), cooperative inquiry (Druin, 1999), situated design (Greenbaum and Kyng, 1991), the user-designer approach (Reigeluth, 1996), ID2 transaction shells, R2D2 model (Willis and Wright, 2000), emancipatory design (Carr-Chellman and Savoy, 2004), and user design (Carr-Chellman, 2007). Although these perspectives are not identical or equivalent, the common thread among them is that in all of them users actively participate to a greater or lesser degree in the design of a system or a product. ...
... Thus, rapid prototyping is appropriate for developing electronic performance support systems (Gery, 1995; Gustafson and Branch, 1997; Gustafson and Reeves, 1990; Law et al., 1995; Witt and Wager, 1994), conference video designs (Appelman et al., 1995), software designs (Dumas and Redish, 1993; Sugar and Boling, 1995), and computer-based instruction (Tripp and Bichelmeyer, 1990). It is also useful in Web design (Boling and Frick, 1997; Corry et al., 1997; Frick et al., 2005) and for collaborative learning (Goodrum et al., 1993; Tessmer, 1994). ...
... Tripp and Bichelmeyer (1990) pointed out further cautions in the use of rapid prototyping, including the need for tools that support building prototypes efficiently, choice of optimal methods for both design and evaluation of prototypes, and, most importantly, knowledgeable and experienced designers. Frick et al. (2005) added important front and back ends to the rapid prototyping process. Their inquiry-based, iterative design process was developed and improved through formative research methods and includes needs assessment of the stakeholders, rapid prototyping on paper with usability testing, further rapid prototyping on computers with more usability evaluation, and creating and maintaining the product designed (Reigeluth and Frick, 1999, p. 21). ...
Chapter
This chapter surveys methods, techniques, practices, and challenging issues in user-centered design and development (UCDD). The traditional instructional systems design (ISD) approach has been criticized for its bureaucratic and linear nature and its slow process. Two alternatives to that approach are discussed here: rapid prototyping and participatory design. These have been put forth as alternative models that address the many limitations of the conventional ISD model.
Conference Paper
The present paper reports a learning method for an e-learning environment. An increasing number of students in higher education institutions are considering or have started developing e-learning technology. The researchers designed a mentor-based learning model. It is intended to address the lack of teaching resources and to meet students' individual learning needs.
Article
Many business schools in the United States have experienced a decrease in funding. To compensate for the reduced revenue and remain competitive, a number of these institutions have discovered new and creative ways to raise money, such as using the Internet. This study examined the impact that the Internet has on business school philanthropy and identified online giving trends among randomly selected AACSB International-accredited institutions in the United States. A 20-item questionnaire was used to measure the results. Of the 107 business schools that participated in this study, 36.4% (n=39) raised money online. Data also revealed that 66.7% of the business schools that raised money online reported that the average size of an individual online gift was $250 or less, and nearly 80% of the respondents claimed that online donations accounted for 10% or less of the total amount they received in annual donations. This study also explored other variables such as the type of institution (public or private) that accepted online donations as well as the type of fundraising office a business school had (decentralized, centralized, or combined). Donor characteristics and marketing strategies used by business schools to promote their online fundraising programs were also examined. The results revealed that many business schools did not accurately track the demographics and characteristics of their online donors. Findings from this study indicated that advancements in technology have increased opportunities for business schools to obtain financial support. The results can be used as a benchmark for future investigations.