First, Veldof introduces the ADDIE model, the core process of instructional systems design for one-shot library workshops. According to this model, to create an effective one-time workshop, instructors first need to investigate the needs of the agency (Analysis), design a workshop that provides what the agency wants to know (Design), develop a workable lesson plan and other logistics, such as instructor and learner materials (Development), test and revise the workshop (Implementation), and finally gather evaluation feedback to improve the quality of the workshop in the future (Evaluation).
While Yelinek et al. (2008) do not mention the ADDIE model, their empirical analysis seems to follow it. In their study, before designing a workshop, they first tried to understand the needs of the agency and the existing online tutorial system. They focused on a workshop for students who are enrolled in a distance-education program and are unfamiliar with the online tutorial program. By surveying staff and interviewing students and their parents, they found that a PDF tutorial is an inefficient tool for distance-education students because it lacks directions for requesting exams, getting feedback on exams, and viewing grades. They therefore decided to design an online tutorial, built with Captivate MenuBuilder, consisting of five discrete modules that address the problems students were facing.
The merit of Yelinek et al.'s study is its demonstration that a careful background check and a focus on the agency's needs are an important starting point for creating an effective one-shot workshop. In general, students on campus can ask for face-to-face instruction whenever they encounter a problem; a PDF tutorial should therefore be considered only one of several supplementary materials. Students in a distance-education program, however, seldom have a chance to receive face-to-face assistance from librarians or faculty. For them, a well-designed online tutorial workshop is a pivotal learning tool.
Yet I think that Yelinek et al.'s study does not address the last two processes, Implementation and Evaluation. In creating a useful online workshop, the workshop must, needless to say, be designed in an agency-friendly way. But it is equally important that the development of a workshop also be creator-friendly. As Veldof points out, developing a workshop takes a great deal of time and money, so once instructors create one, it needs to be reusable in the future (Veldof, 2006, p. 6). To improve the reusability of a workshop, continual evaluation and revision are required. Yelinek et al. reported that MenuBuilder did not allow creators to copy and paste a web address for the SurveyMonkey survey, so creators had to type all web addresses manually, and that MenuBuilder automatically created links for every line of text on the menu, even text that was not meant to be a hyperlink (Yelinek et al., 2008, p. 104). They claim that these technical problems are not serious, but they are: when instructors want to pass their know-how on to the next instructors, when instructors add more information to their workshop materials, and when students are confused about which items are real hyperlinks, the usability of the online tutorial workshop is diminished. In short, I think Yelinek et al. need to think harder about whether Captivate MenuBuilder is an optimal choice for developing a workshop. Related to this problem, Griffis's (2009) attempt to find useful screen-capturing tools at no cost is worth noting because, as he claims, it provides ample opportunity for low-stakes experimentation by library staff in building a dynamic online tutorial for a workshop.
Compared to Yelinek et al.'s study, Johnston's study focuses more on the development and evaluation phases of a one-time workshop. The main target of his study is first-year social work students at a university, and its aim is to test how much an online tutorial workshop can improve students' research skills. Unlike Yelinek et al., Johnston collected data from a larger group (about 100 students) and, based on those data, argued that the online tutorial helps improve students' research abilities. Given the larger sample size, Johnston's study seems more reliable; however, his testing method is flawed, which significantly diminishes the credibility of his argument. That is, Johnston did not test students who had not taken the workshop, so we cannot really know how much attending the one-time workshop actually improved a student's research skills compared with not attending at all. Instead, he simply concluded that a majority of students in his survey reported that they “felt” their searching skills had improved after taking the workshop. However, I do not think that measuring student confidence is a good way to estimate students' research skills, because students tend to overestimate their online research ability. (For more details, see Rieh, S. Y., & Hilligoss, B. (2007). College students' credibility judgments in the information seeking process. In M. Metzger & A. Flanagin (Eds.), Digital media, youth, and credibility (pp. 49-72). Cambridge, MA: The MIT Press.)
All in all, while the assigned articles are short and deal with practical issues, they helped me learn that a one-time workshop is not just another handy supplementary tool but an important rescue tool for agencies such as students in distance-education programs.