Ask Harold

 

Oh expert in the crystal ball, please tell all...tell all!

Questions abound in the learning and performance world. Here are a few we have been asked several times recently along with our answers. We welcome additional questions as well as comments and reactions to our responses.


 

Questions

What is the best software to use to create hierarchical task analyses? Answer

Do you have any information on the effectiveness of short and spaced learning segments compared to longer, more concentrated learning sessions? I have seen a number of articles touting these. Answer

Why don't more findings from learning research make it into everyday teaching/training practice? I am blown away by seeing instructors and teachers do things that, from my limited exposure to research studies, appear unsupported, while research findings, some of them long-standing, fail to influence practice. Answer

As a training-learning practitioner, I find the differences between neurological/neuroscientific research principles and cognitive science research principles to be confusing. Can you untangle these for me? Answer

I have to do a presentation for our Organizational Development team on publicity writing and marketing OD programs. Can you give me some resources to use? Answer

I was wondering if you could point me in the right direction in finding a case study (or studies) comparing the learning results of Instructor-led Training (ILT) versus Web-based Training (WBT)? I am currently working on my Master's thesis, in which I am comparing the two, and I am trying to find some numbers that back up my theory that ILT is more effective because of the live human interaction, etc. I remember you mentioning at a conference I attended that well-designed WBT is just as effective as ILT, but I'm still skeptical. Any resources that you could recommend would be greatly appreciated. Answer

A colleague of mine and I are working on the challenges new hires face in organizations. We were wondering: when is it worthwhile to hire a star performer? And, the flip side of that: if you are a star performer, should you stay put or move to other positions and/or organizations? Answer


Answers


What is the best software to use to create hierarchical task analyses?

There may be some good new ones, but my experience with various drawing and hierarchical task analysis programs has not been great when I want to display a hierarchical task analysis graphically. I go at it the old-fashioned way. I use a basic drawing program and create several sizes of boxes. I then position them at each level, using increasingly smaller sizes, and connect the boxes at each level with drawn straight lines, duplicating boxes from my original set as needed. To continue a diagram on another page, I use a lettered circle, in sequence (A, B, C…), with an output arrow on one page and a matching input arrow on the next. This approach may appear primitive, but it gives me maximum flexibility and avoids the rigidity of automated conversions from linear to graphic representations of a hierarchical task analysis.

If you simply wish to show the hierarchy linearly, Microsoft Word can do it.
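That said, for readers comfortable with a little scripting, one free option worth experimenting with is Graphviz, which computes the box layout automatically from a plain-text description. Below is a minimal Python sketch (the task names are invented purely for illustration) that emits a Graphviz DOT file:

    # Minimal sketch: emit a Graphviz DOT file for a hierarchical task analysis.
    # The task names below are invented for illustration only.
    tasks = {
        "0. Prepare monthly report": {
            "1. Gather data": {
                "1.1 Export sales figures": {},
                "1.2 Collect field notes": {},
            },
            "2. Draft report": {},
            "3. Review and distribute": {},
        }
    }

    def emit(node, children, lines):
        # One box per task; a straight line from each task down to its subtasks.
        lines.append(f'    "{node}" [shape=box];')
        for child, grandchildren in children.items():
            lines.append(f'    "{node}" -> "{child}";')
            emit(child, grandchildren, lines)

    lines = ["digraph HTA {", "    rankdir=TB;"]  # top-to-bottom hierarchy
    for root, children in tasks.items():
        emit(root, children, lines)
    lines.append("}")

    with open("hta.dot", "w") as handle:
        handle.write("\n".join(lines))
    # Render with Graphviz installed: dot -Tpng hta.dot -o hta.png

Because the layout is computed for you, adding or reordering subtasks never forces you to redraw boxes and connectors by hand.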


Do you have any information on the effectiveness of short and spaced learning segments compared to longer, more concentrated learning sessions? I have seen a number of articles touting these.

Overall, the research evidence has been both stable and overwhelmingly in favor of short learning segments. The technical terms used are "massed" versus "spaced" learning. Below is a very brief summary of the research. It is an excerpt from an upcoming book we are working on, It Ain't Necessarily So: Science Versus Lore in Learning and Performance. I updated it for you. It's slow going. Wait a year.

Massed versus spaced learning. Dempster,[i] in an early review of research evidence on massed versus spaced learning, summed up a great deal of accumulated evidence on the issue, concluding that "spaced presentations yield substantially better learning than do massed presentations." He deplored the fact that, despite enormous numbers of research studies demonstrating the superiority of spacing out teaching and practice (allowing for assimilation of newly acquired knowledge), massed sessions still predominate in the learning world. His conclusions were echoed by Grove of Ohio Wesleyan University,[ii] who found that although massed learning conditions may appear to produce immediate performance gains, longer-term retention and problem-solving capability suffer: "Statistically significant results favoring spaced practice were obtained for both recall of subject-matter and the application of information to solve new kinds of problems." He also found through learner interviews that "data generally showed a preference for spaced practice." Thalheimer presents a very readable summary of the research in "Spacing Learning Events Over Time: What the Research Says."[iii]

Donovan and Radosevich[iv] meta-analyzed the literature on massed and spaced practice and computed "an overall mean weighted effect size of 0.46, indicating that individuals in spaced practice conditions performed significantly higher than those in massed practice conditions." An even more recent meta-analysis by Cepeda, Pashler, Vul, Wixted and Rohrer[v] (see also Pashler, Rohrer, Cepeda and Carpenter[vi]) concluded that "spaced presentations improved final-test performance by 9%, compared with massed presentations." Advantages of spaced, as opposed to massed, practice accrue for individuals of all ages and serve to make teaching more effective.[vii] Karpicke and Roediger[viii] found similar supporting evidence and concluded that "the important factor for promoting long-term retention is delaying initial retrieval to make it more difficult, as is done in equally spaced retrieval." Finally, a 2013 experimental study comparing massed versus spaced training in function learning (two separate experiments) thoroughly demonstrated the superiority of a spaced approach. In the experimenters' words, "Spacing led to superior interpolation and extrapolation performance, with random massing leading to the worst performance on all test trial types" (p. 1).[ix]
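For readers unfamiliar with meta-analytic effect sizes, a figure such as 0.46 is typically a standardized mean difference: the gap between the spaced and massed group averages, expressed in units of their pooled standard deviation. Roughly:

    d = (mean of spaced group - mean of massed group) / pooled standard deviation

By Cohen's widely used benchmarks (0.2 small, 0.5 medium, 0.8 large), 0.46 represents a moderate, practically meaningful advantage.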

Hence, while rapidly getting to the right answer in a massed learning session appears to make sense in our time-strapped workplaces, allowing more opportunity to learn and practice, spaced out over time, generally results in stronger and longer-lasting performance.
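To make the contrast concrete, here is a small illustrative Python sketch. The four-session split and one-week interval are arbitrary assumptions chosen for illustration, not research-prescribed values:

    # Illustrative sketch: spread one massed session over spaced segments.
    from datetime import date, timedelta

    def spaced_schedule(start, total_minutes, segments=4, interval_days=7):
        # Divide the same total study time into equally spaced sessions.
        per_segment = total_minutes // segments
        return [(start + timedelta(days=i * interval_days), per_segment)
                for i in range(segments)]

    # Massed: one 120-minute session on a single day.
    # Spaced: the same 120 minutes as four 30-minute sessions, one week apart.
    for session_date, minutes in spaced_schedule(date(2014, 1, 6), 120):
        print(session_date, "-", minutes, "minutes")

The total time invested is identical; only its distribution changes, which is precisely the variable the studies above manipulate.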

[i] Dempster, F. N. (1989). Spacing effects and their implications for theory and practice. Educational Psychology Review, 1(4), p. 267.

[ii] Grove, M. G. (1995). The Effect of Massed Versus Spaced Practice on Retention and Problem-solving in High School Physics. Ohio Journal of Science, 95(3), 243-247.

[iii] Thalheimer, W. (2006). Spacing Learning Events Over Time: What the Research Says. http://www.leerbeleving.nl/wp-content/uploads/2011/11/Spacing_Learning_Over_Time__March2009v1_.pdf

[iv] Donovan, J. J. and Radosevich, D. J. (1999). A Meta-analytic Review of the Distribution of Practice Effect: Now You See It, Now You Don't. Journal of Applied Psychology, 84(5), 795-805.

[v] Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T. and Rohrer, D. (2006). Distributed Practice in Verbal Recall Tasks: A Review and Quantitative Synthesis. Psychological Bulletin, 132(3), 354-380.

[vi] Pashler, H., Rohrer, D., Cepeda, N. J. and Carpenter, S. K. (2007). Enhancing Learning and Retarding Forgetting: Choices and Consequences. Psychonomic Bulletin and Review, 14(2), 187-193.

[vii] Seabrook, R., Brown, G. D. A. and Solity, J. E. (2005). Distributed and Massed Practice: From Laboratory to Classroom. Applied Cognitive Psychology, 19, 107-122.

[viii] Karpicke, J. D. and Roediger, H. L., III. (2007). Expanding Retrieval Practice Promotes Short-term Retention, but Equally Spaced Retrieval Enhances Long-term Retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(4), 704-719.

[ix] McDaniel, M. A., Fadler, C. L. and Pashler, H. (2013, April 8). Effects of Spaced Versus Massed Training in Function Learning. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. doi: 10.1037/a0032184


Why don't more findings from learning research make it into everyday teaching/training practice? I am blown away by seeing instructors and teachers do things that, from my limited exposure to research studies, appear unsupported, while research findings, some of them long-standing, fail to influence practice.

There are a huge number of reasons. What follows is a list of ten. Many others could easily be added.

  1. Limited access by educators and corporate L&D departments to credible research findings. Research findings are usually published in research journals that are not readily available to the practitioner world. Practitioners are usually overwhelmed by current workloads and respond most readily to priorities of the moment. There is little immediate reward for searching the research sources.
  2. Research is nuanced. Read an article based on a research study. It is generally narrowly focused and filled with lists of "limitations of this study." The working professional requires credible, generalizable principles to help make decisions about changing training-learning practices. Research findings are rarely clear about how to apply what has been discovered.
  3. Practitioners' own belief systems. Right-brain/left-brain, learning styles, neuro-linguistic programming, multiple intelligences, etc. are examples of exciting revelations that stir up energy in educators and trainers. Despite flimsy or non-existent evidence of efficacy, they create enthusiasms that transform into beliefs that are hard to root out, even in the face of research findings that do not support them.
  4. Organizational barriers. Commitments have been made about learning within the organization. The team understands these. Investments in time, resources and energy have already been made. Disruptive research findings are not welcome.
  5. Fads. Koosh balls hurled around the room, outdoor adventure team building and suggestopedia are examples of fads that catch on. They appeal to the imagination and take up space as well as resources in learning sessions. They are generally far more appealing to apply than what may be far more effective practices based on research. Through hype and marketing, they are more centrally placed within the training-learning realm than less glamorous, research-supported methods such as deliberate practice or the application of cognitive strategies.
  6. Resistance to the application of research-based evidence. There is considerable research demonstrating that people do not seek research-based evidence to make decisions. Rather they hunt for research to support decisions already made.
  7. Conflicting research findings. Researchers, being very conservative in how they present their findings, leave open the possibility that slightly differently designed studies may produce different results. This can lead to conflicting outcomes being published. Studies on the effects of irradiation on foods, for example, have produced different conclusions based on methodological differences in how the studies were conducted. The same is especially true of studies on communication, personality and learning styles.
  8. Cost. It may be expensive to change based on new research.
  9. Resistance to change. While we welcome what we view as beneficial change, we have difficulty accepting changes that we find threatening or requiring additional effort.
  10. Inertia. Somewhat related to the previous point, it requires effort to start up in a different direction. Unless there is an overwhelming incentive to shift to a different paradigm based on research, it is so much easier to continue along the same, time-worn path.


As a training-learning practitioner, I find the differences between neurological/neuroscientific research principles and cognitive science research principles to be confusing. Can you untangle these for me?

The differences begin with the fundamental focus of each. Neurology is concerned with the functioning of the brain, nervous system and spinal cord. Cognitive science studies the workings of the mind. One is a medical approach to determining how our senses, nerves and brain deal with information via the chemistry of the human body. The other deals with how our minds encode and interpret the information we receive. Obviously, there can be overlaps between the two when studying learning. The key tools of the neurologist and neuroscientist are those related to neuroimaging, the microscope, chemistry and electronic devices. They observe what occurs inside the body as the brain develops, transmits information and creates connections chemically, and they display these observations graphically. The cognitive scientist makes observations through experimentation outside of the body.

When it comes to contributions to Learning Science, each complements the other. Principles drawn from neuroscientific investigation and discovery can guide the learning professional toward understanding how the brain functions, including the tracking of information flow, the creation of synaptic clusters and the ways various brain chemicals, especially neurotransmitters, affect how we react under various stimulus conditions. Cognitive principles can help us comprehend the effects on learning as we vary external conditions affecting the mind.

From a practical perspective, the learning professional should turn to neurology and neuroscience for the most up-to-date discoveries on how the brain functions and what takes place, physiologically, during learning. Cognitive science provides models and methods for structuring and delivering learning derived from data-based experimentation. Together, they help educators achieve learning success: neuroscientific principles explain how the brain functions, while cognitively derived principles guide how we develop the mind.


I have to do a presentation for our Organizational Development team on publicity writing and marketing OD programs. Can you give me some resources to use?

Much as I would love to help out, my areas of specialization do not encompass publicity writing and marketing of OD programs. However, if you permit, I will share with you some ideas about marketing and communicating the value of what performance professionals offer to their clients, the people they manage and their organizations.

I begin with an assertion most people in our line of work don't like: we are poor marketers and communicators of our worth to our clients and the organizations we so diligently strive to help achieve results they value. I say this based on almost a half century of wandering through companies, government agencies and even the professional societies of which we are proud members. We talk about what we do and the research in our fields, and we employ jargon that is generally unfamiliar or unintelligible to those who are not part of our professional cultures. No wonder we are not heard.

An error we too often make is to listen to our clients who often approach us, not only with their problems, but also with what they believe to be appropriate solutions. They believe that they have already conducted their analyses and know what's wrong as well as what the cures should be. And we, submissively, frequently accept their words and carry out what they have requested. Our justification? They are our "customers," after all, and are, in most cases, of superior rank to us. We then work hard, with not always felicitous results.

So how do we extract ourselves from this?

First, before we write any publicity or engage in marketing, we have to center ourselves firmly. What is our vision and mission for our own work? What is our unique contribution to the various levels of recipients of our interventions? What is it that we can uniquely provide that contributes to the successes of our organization's customers, the organization itself, our direct clients, their workers and ourselves? What are our special strengths? What is the menu of services we can currently and competently deliver? What should we also be able to deliver, and how do we equip ourselves to do so?

There is more, but as we define who we are and what we have to offer, the foundation for communicating this to others becomes more concrete. However, people have difficulty understanding the abstract. We require strong examples, credible allies and satisfied customers to help our marketing and support our publicity. The maxim in marketing is that "word of mouth" is the strongest and most effective way to attract and/or influence potential customers. Thus, we must clearly define ourselves, our contributions, our services and our value-adds. We must then seek out what I refer to as "compliant clients" - ones with whom we have built positive relationships and a certain amount of trust. Working with them, we can help identify areas for performance improvement or significant desired change and then get them to allow us to do the right things to attain success. Choose small projects that from the outset offer a high probability of success. Document what you do. Ensure that recognition of success for these ventures accrues to the clients. Make them feel good about what they have accomplished. Share the glory.

With a few of these cases in hand, hold a show-and-tell. Offer a "free lunch." Enlist senior managers and opinion leaders to support your efforts and the event. There, get your satisfied customers to talk about the challenges they faced, what they and you did together and what was accomplished. Use concrete data. Let them be the heroes and you, the hero builders. Write up articles, with their names and yours on them, for internal newsletters or even external ones. At this point, you can distribute materials that succinctly and professionally state who you really are and what benefits you offer to those who call upon your services. Include simple, succinct success stories. Say less, but intrigue readers enough to stimulate their calling on you.

I have been very brief in my explanations and have left out the how-to's. However, I hope this helps as a thought starter.

There are many books on publicity writing and marketing. However, you are not Coca-Cola. You are professionals with great ability, a desire to serve and a solid track record. Following Socrates' advice, begin by knowing yourselves - thoroughly. Then let others tell your story in words their listeners understand. Soon, many will be beating a path to your door.


I was wondering if you could point me in the right direction in finding a case study (or studies) comparing the learning results of Instructor-led Training (ILT) versus Web-based Training (WBT)? I am currently working on my Master's thesis, in which I am comparing the two, and I am trying to find some numbers that back up my theory that ILT is more effective because of the live human interaction, etc. I remember you mentioning at a conference I attended that well-designed WBT is just as effective as ILT, but I'm still skeptical. Any resources that you could recommend would be greatly appreciated.

This is a tough comparison. There are so many variables that go into creating a learning session, and the delivery mechanism is not the main differentiator. In a large number of media and delivery-system comparison studies, the general conclusion drawn has been that "all other things being equal, there is no significant impact from the medium with respect to learning effectiveness." Richard E. Clark has written extensively on this.

What you are trying to compare is a lesson in ILT with one in WBT. How do you maintain perfectly equal design elements? You can't. Long ago, there were attempts (and occasionally you see more recent ones) to make comparisons between live classroom instruction and some form of CBT. These went nowhere. Kulik and Kulik pushed this agenda in the 1980s, attempting to demonstrate the superiority of CBT. Robert Kozma at the University of Michigan also argued that CBT was more effective for learning. Richard E. Clark demolished their arguments.

I recommend focusing on specific design aspects in live and web-based instruction to determine whether they have similar effects. Examples are the use of inference, dynamic versus static examples, some form of questioning and forms of feedback. It's the design of the instruction, not its delivery mode, that makes the difference.
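For example, a simple balanced design that crosses delivery mode with a single design feature (the feedback variation below is an invented illustration) isolates the feature's effect from the medium's:

    Group A: ILT with elaborated feedback
    Group B: ILT with simple right/wrong feedback
    Group C: WBT with elaborated feedback
    Group D: WBT with simple right/wrong feedback

If Groups A and C outperform B and D by similar margins, it is the feedback design, not the delivery medium, that is doing the work.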


A colleague of mine and I are working on the challenges new hires face in organizations. We were wondering: when is it worthwhile to hire a star performer? And, the flip side of that: if you are a star performer, should you stay put or move to other positions and/or organizations?

To begin, being a star performer does not necessarily mean that the person is limited to stardom in one setting and will not perform well elsewhere. If the high performance is based on intimate relationships with various parts of the organization, developed over a long period of time, or on a deep familiarity with processes and systems unique to the organization, then transfer of success to a new environment is less likely. If, on the other hand, the person operates mostly as an individual contributor or as a leader of direct reports who are there to carry out support tasks for this person, then the success factors can be more readily transported elsewhere. A high-performing rocket design engineer, a research scientist with deep, specialized knowledge, or a star salesperson or hairdresser with his or her own system and personal connections are examples of stars who can be more readily moved from one work environment to another. However, this is not a dichotomy; rather, it is a continuum. The key factors are the sources of success: the more dependent they are on the current organization’s environment, systems and circumstances, the less portable the high performance.

The second question: “If you are a star performer, should you stay put or move to other positions and/or organizations?” Obviously, there is a linkage between this question and what I wrote above. What the research suggests for star performers is that they not be blinded by their current success; rather, they should build a broad network of connections and contacts outside of their specific work environment. They should also invest in their interpersonal, organizational and communication skills to increase their ability to transport their successes elsewhere. On this point, Groysberg has found that women appear to be more successful with their star portability than men. He suggests that this may be because women more frequently maintain broader relationships and networks. Because high-performing women tend to meet more obstacles in the workplace, they also tend to weigh more factors than men do, especially cultural fit, values and managerial style, in making job changes. You might want to go to http://www.bnet.com/2439-13070_23-186751.html for more on this.

