You were referred to in a webinar I attended as a source for the statement that know-how and skills contribute only around 30% to improvements in performance. This being the case, what are the other factors that make up the other 70% and where may I find this reference? Is it contained in your research or in a publication? Answer

I am conducting a study on the workload of university professors. Besides time-on-task, what other components of the workload should be measured? For example, you and I could spend the same amount of time on a task but your results would certainly be of higher quality than mine. Or, you might spend less time on a task than I do and, again, your results would be of better quality. Applied to professors, what is the relation between time-on-task and performance? Answer

According to a leader in my organization, “HPT” and “coaching” are the exact same thing. In fact, according to the leader, the two expressions are synonyms and one hundred percent exchangeable. I was under the impression that “coaching” was a trend in Europe (most books I have seen come from France or England) and, although it was about performance improvement, it was not rooted in theory and research, and was only systemic but not systematic. Actually, I thought it was a trend started by a few athletes “importing” their coaching expertise into the business world. Would you be kind enough to give me your impressions on this? Answer

I am a senior in university and want to go into human capital consulting after graduation in May. I was wondering what positions are open for recent graduates. Answer

What basic elements should be included in a "training plan" for a job in which the development of technical and soft skills are required? Answer

What is the difference between a competency and a skill? Answer

In what kind of company would the technical skills of top managers be more important than human relations or conceptual skills? Answer

I'm a training professional working in the pharmaceutical industry and am seeking a basic competency model detailing what performance looks like for classroom trainers. Ideally the model would describe performance at three different levels: basic/fundamental, intermediate and advanced. Can you point me in the right direction? Answer

I'm putting together a pre-test and post-test for one of our training programs. It will be given to some, but not all of the participants to determine if the participant knowledge increased as a result of the training. Since the training we do is not based on clear, measurable objectives, the only test I can create is a knowledge test of the content that is covered. Do you have any information on when a pre- and post-test is effective and when it is not? Answer

Where can I find current research on transfer of training? Who are its key researchers? Answer

How can the use of effective job feedback designs or job aids replace on-the-job training? Under what conditions would formal training be a better solution? Answer

Is there a method to determine if an individual's mindset is activity-based versus results-based? Answer

In which types of companies are technical skills more important than human relations or conceptual skills for their top managers? Answer

I am teaching a class in training and development and am using your books, Telling Ain't Training and Beyond Telling Ain't Training Fieldbook, for my graduate students in a Master of Arts program. On page 141 of the Fieldbook, you advocate selecting training topic assignments and then state "Distribute a 10-minute section of the training program to each participant." Where might one find appropriate topic assignments? Could you suggest several topics that have worked for you in the Video Practice Session (VPS) and provide an example? Answer

In the world of Competence Assurance and job profiles, what percentage of total tasks/procedures must a candidate be assessed against to assure competence? Is a representative sample of critical tasks sufficient or should you assess against every task/procedure available? Answer

I would like some advice on paper versus electronic job aids. I'm concerned that paper gets outdated and people still hold onto the old information, even when updates are provided. The electronic format is good, but it's not as convenient so may not be used as much. Would you please share your thoughts on this subject? Answer

I am in a class on Entrepreneurs and we are required to put a business plan together. I will be putting together a plan for my newly formed performance consulting company and part of the plan requires us to put together our pricing. Do you know where this information might be available? Answer

What is conceptual skill and why is it important to top management? Answer

Is there any research that shows that learning transfer happens more effectively and efficiently when inside trainers are used versus outside consultants? We are implementing a new IT system and our COO thinks it is more effective to use the new systems trainer who will learn our processes and teach the new workflows, versus a T3 approach using trainers from each department who have cultural knowledge and understand the work and the application of the new system in their environment. Answer

When/under what circumstances should I use a RASCI model and how do I use it? Answer

I'm working on a performance improvement project and just finished the cause analysis. The gaps I found fall mostly under trust issues and open communication. Should I concentrate on the examples and factors of why that trust is not there or should I state that lack of trust is the reason they are having these gaps? Also, when looking at the Behavioral Engineering Model, where would lack of trust fall? Should it go under information or capacity? Answer

Do any of the consultants you work with who are internal (company) consultants report difficulties in getting closure with their internal clients? We currently have some 19 projects in process but are having difficulty with some when it comes to closure. It seems that other priorities delay meetings, implementation activities, etc. If we were external and continuously invoicing clients for hours worked, it would probably be a different story. Any thoughts or suggestions? Answer

At the ISPI Conference in Chicago this past fall, you stated that learning styles are pretty much discredited now as they only account for 2% of learning effectiveness. You said there was no firm evidence to support them. Can you give more information on any academic literature to back up what you're saying? I agree with you but have colleagues who strongly disagree. Answer

Learning packets are one way in which our employees receive education on new topics. We struggle with finding a way to consistently and realistically determine how long it should take them to finish a packet. Sometimes they complete the packets at home and are then paid for the time. So pay becomes a part of the scene and can cause problems when people feel the time estimated to complete the packet is in error. We also offer classes that require study time outside of the classroom. They are also paid for this time. We struggle with determining the correct time for this work as well. Do you have any insights or experiences with determining independent study time? Some staff feel they are not given enough time while others seem to need less. What is the most equitable way to determine these time frames? Answer

What is the difference between efficiency and effectiveness? Answer

I read your article "Keys to Performance Consulting Success." However, it still seems like there's a huge canyon to bridge between doing training-only and performance improvement. You have to acquire the skill set (hire new people) to get the clients for performance improvement. How do you justify acquiring the skill set if you don't have the clients? Answer

What are some standard questions that should always be asked when speaking with a SME to ensure you find out everything you need to know? Answer

What is the difference between ability and skill? Answer

In one of your articles you mentioned "not more than 10 percent of the expenditures actually result in transfer to the job." Does this mean:

  1. Transfer rate of 10% was achieved because proper analysis was conducted (including transfer strategies)?
  2. We should have invested the 90% to improving work environment factors as training was obviously not the ideal solution? Answer

Can HPT be applied in an educational organization? Answer

There is increasing interest from the training, consulting, and HR communities in HPT but there is also worry whether this interest is going to translate into increased practice and opportunities. Please comment. Answer

What is the difference between skill and competency? Answer

When conducting a needs analysis, what are the most important questions to ask? Answer

Do you have any additional information on the topic of transforming performance capability? Answer

I am most interested in supporting my managers in having a discussion with their employees pre and post training. Do you already have a template of what this discussion might look like? Answer

Why is it so hard to apply a few proven best practices in our work environment? Answer

How do accelerated learning and brain-based learning relate to the emotions of fear? Answer

What is the difference between a training needs analysis and a performance analysis? Answer

We frequently hear the term "learning styles" tossed around like "I know there are tests for people to identify their learning style" and so on. People use this to argue for one delivery medium over another like "My learning style prevents me from engaging in e-learning!" I would like to know if there are any scientific grounds for the existence and definition of learning styles. If so, what are they and what are the main messages coming out of this literature? Answer

Several years ago I encountered a training philosophy that I liked, but I don't know how to find resources that would teach it. The philosophy is that you let people know up front what specific knowledge and skills they will be taught and expected to know. An outline is made which essentially becomes a written test, but not in the traditional academic format. The test items are written in a verbal, proactive way, such as "Describe ...", or "Relate these items.." or "Demonstrate ..." for those skills that can be observed. So the "test outline" is seen at the beginning, the trainee knows exactly what is to be learned, and that the objective is to complete it with 100%, not a lesser passing grade. The concept is that, if you are training someone in an important task, are they competent at 80% or wouldn't it be better if they were 100% on things you need them to know? Can you identify this philosophy and lead me to resources? Answer

What is the difference between organizational effectiveness and human performance technology (HPT)? Answer

What is the difference between Organizational Development and Human Performance Improvement? Answer

Would you have any advice for new consultants to the field? What skills, aptitudes and attitudes do you think are important to succeed in today's competitive marketplace? Answer

To what extent do we, as instructional designers and human performance technologists, need to immerse ourselves in the contexts of our client's business? In other words, must we acquire or possess in-depth subject knowledge of their field, or is it sufficient to be merely familiar with their content? Answer

What are the best and most popular needs assessment (competency-based or performance-based) models? Answer

Increasingly, we try to homogenize training audiences, that is, to group together the same levels of ability and learning capacity instead of diversifying the audience (fast, regular or slow learners in separate groups); what do you think of this? Answer

What do you think the future holds for training professionals with respect to Reusable Learning Objects? Answer

How does one objectively measure a candidate's personality in an interview? How important is the personality aspect in a candidate vis-à-vis his technical skills in today's competitive scenario? Answer

What is meant by a "blended solution"? Must it include a high tech component? Answer

What is the difference between front-end analysis, needs assessment, and needs analysis? What can I say to my management to persuade them to invest in front-end analysis for my projects? Answer

Why don't men listen? Answer

Can you develop interactive, online learning teaching learners to use a software product when the software is being developed concurrently? If yes, how is this done to ensure cost containment and efficient use of resources? Answer

What is the difference between a performance, an accomplishment, a competency and a skill? Answer

Is e-learning all it's cracked up to be? Answer

What is performance consulting? Answer

What is the difference between Human Performance Technology (HPT) and Human Performance Improvement (HPI)? Answer

How do you calculate Return on Investment (ROI) in training or human performance improvement interventions? Answer

Is ISD outdated? Answer


You were referred to in a webinar I attended as a source for the statement that know-how and skills contribute only around 30% to improvements in performance. This being the case, what are the other factors that make up the other 70% and where may I find this reference? Is it contained in your research or in a publication?

The degree to which skills and knowledge variables affect human performance at work strongly varies with the context of a performance situation and the nature of the desired performance. Overall, however, both research and documented professional practice have shown that there are a host of factors affecting workplace performance. I refer you to several books: Thomas Gilbert (1996), Human Competence: Engineering Worthy Performance; Geary Rummler and Alan Brache (1990), Improving Performance: How to Manage the White Space on the Organization Chart; James Pershing (2006), Handbook of Human Performance Technology. All of these volumes contain a great deal of information concerning the array of factors influencing how people perform. What emerges in a consensus fashion is that about 75 to 80 percent of these are of an environmental rather than an individual nature. Here are the ones most frequently cited: lack of specific performance expectations; conflicting expectation priorities; lack of timely and specific feedback with respect to expectations; lack of timely access to required information; task interferences; inadequate tools and resources; unclear or counterproductive policies, processes and procedures; inappropriate or even counterproductive incentives and consequences; poor or inappropriate selection of performers; lack of perceived value to perform; threats in the environment; environmental obstacles (physical, administrative, emotional); language and cultural issues.

There are many more, but these are the ones that are most commonly found. As you examine the literature on human performance technology, you will encounter these and many others.


I am conducting a study on the workload of university professors. Besides time-on-task, what other components of the workload should be measured? For example, you and I could spend the same amount of time on a task but your results would certainly be of higher quality than mine. Or, you might spend less time on a task than I do and, again, your results would be of better quality. Applied to professors, what is the relation between time-on-task and performance?

You pose a question that has multiple answers. Let me change your question to a more general one and in the process define some terms. Let's start with the word "performance." In my world of the workplace, most human performance technologists and I define this critical term as "valued accomplishment derived from costly behavior." [See Thomas Gilbert's book (1996) Human Competence: Engineering Worthy Performance.] Many others, such as Geary Rummler, Peter Dean, and Harold Stolovitch and Erica Keeps, have explored this definition and applied it in a number of workplace instances.

To elaborate, performance is a function of what you do and what you achieve. The doing is the cost portion (effort, money, resources, time). What you achieve, the valued accomplishment, is the benefit or desired result you obtain from the expenditure of the costly behavior. Gilbert proposes his "Leisurely Theorem" which, in brief, says that the best performance is one where we obtain the greatest valued accomplishment with the least effort. This certainly includes time expenditure. The corollary to this theorem is that the more time you require to achieve a valued result, the less "worthy" you are as a performer (worth being the ratio of value to cost and time being a critical cost factor).
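
To make the ratio explicit - this is only my shorthand for Gilbert's theorem, so treat the notation as illustrative rather than as his exact wording:

    W = A / B    (worth = value of the accomplishment / cost of the behavior)

Time sits inside B: if two performers produce accomplishments of equal value (A) but one expends half the time and effort (B), that performer's worth ratio (W) is twice as high.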

Continuing in this vein, in McKinsey's study The War for Talent (2001), the authors suggest that top performers are 70 percent more productive (achieve more in the same units of time) than average performers. I have been involved in studying exemplary performers in the retail automotive world. What I discovered is that top performers in, for example, automotive sales, are more than twice as productive as the average. The top 20 percent of sales consultants in our study sell 2.1 vehicles for every 1 sold by their average colleagues. And this occurs month after month.

In an internal study at the European Patent Office, communicated to me privately, they discovered that their top patent specialists, most with PhDs and/or very highly recognized technical competencies in specialized fields such as medicine, pharmacology, telecommunications and computer sciences, are three times more productive than their average colleagues. That is, they are able to process patent requests and bring them to closure more rapidly with no less quality (e.g. number of appeals, clarity of decision, numbers of communications) than the others.

You have certainly seen the expression, "If you want a job to be well done, give it to the busiest person." In studying appropriate workload, you have to begin by defining accomplishments. In any given field there are "stars." These are top performers who demonstrate what is possible in a given amount of time. They are the models for setting standards. I recommend that you identify these top performers based on sets of generally acceptable performance criteria. Then study what these exemplary persons do and how much time they spend to achieve their valued results. Publications, teaching scores (too frequently neglected), PhDs produced, significant contributions made, research funds obtained, frequency of citations in respected publications, invitations to speak and invitations to consult are some of the success criteria. They can be made specialty dependent.

To conclude, I recommend starting with accomplishments, deriving exemplary standards and then, based on those who achieve the most with the least amount of wasted time, building portraits of appropriate workloads.

On a personal note, as an active professor, I was aware of the four requirements for promotion: research and publication; teaching; external notoriety (e.g. leadership positions in my field; invitations to speak and conduct seminars, consulting to professionals in my field; awards); internal contributions (e.g. committee work, administrative work; university leadership positions; mentoring of colleagues). They were fairly clear and, I believe, appropriate. From my perspective they required balancing and equal attention. Find those who do well in all of these and analyze their time expenditures. Also study those who are average and those who do poorly. I believe you will find valuable data for your study.


According to a leader in my organization, “HPT” and “coaching” are the exact same thing. In fact, according to the leader, the two expressions are synonyms and one hundred percent exchangeable. I was under the impression that “coaching” was a trend in Europe (most books I have seen come from France or England) and, although it was about performance improvement, it was not rooted in theory and research, and was only systemic but not systematic. Actually, I thought it was a trend started by a few athletes “importing” their coaching expertise into the business world. Would you be kind enough to give me your impressions on this?

What nonsense! HPT is a field. Its purpose is to engineer systems that result in performance all stakeholders value - performers, management, customers, regulators and the community at large, as appropriate. It is systemic in its vision and approach, scientific in its base, systematic and orderly in its methods, and draws from all relevant sources to achieve optimal, verifiable results desired by all stakeholders.

Coaching is simply one intervention to achieve performance. Its central purpose is to increase skills and knowledge and to guide those whose performance requires improvement. By-products of coaching are increased accuracy and fluency through task-focused feedback, increased value and interest with respect to the targeted performance, and increased confidence in being able to perform at desired levels.

Comparing coaching to HPT is like comparing shoes to fashion. Yes, there is a tangential connection - very tangential. Please refer this leader to The Handbook of Human Performance Technology.


I am a senior in university and want to go into human capital consulting after graduation in May. I was wondering what positions are open for recent graduates.

This is a tricky issue. One rarely finds openings for "human capital consultants" per se. I recommend reviewing the ISPI and ASTD job banks located within their websites. The nature of the jobs (the descriptions) provides cues as to what they are hunting for. Also look for positions related to performance consulting, human resource management and human resource development. Sometimes, you find a position that sort of fits what you are looking for. Once in the job, you can begin to demonstrate the value-add of your perspective.

Another route is to contact performance consulting companies and offer to do an internship there. Then, show your stuff. If you're great, they will want to hire you. McKinsey, Accenture, KPMG and Deloitte are the big guns. Search out some of the smaller consulting groups as well.

You might also want to contact companies with strong track records in human capital management and inquire about positions. Nordstrom's, Federal Express, Lexus and General Electric are a few names that come to mind. CDW is moving in this direction.

Read the journals to see who is doing excellent work. Don't hesitate to contact them. If you have limited experience, go relatively cheap to gain experience. You can always renegotiate based on your demonstrated worth to the organization.


What basic elements should be included in a "training plan" for a job in which the development of technical and soft skills are required?

Training plans come in so many varieties that it is hard to respond specifically. The best practice approach is to build a performance map (a job analysis) of the job. This lays out the major performances to be demonstrated by the incumbent, which are then broken down into successively more elementary components. It resembles a task analysis. The next step is to assess each individual based on the performance requirements of the job. This means that each person within a given job classification can (and really should) have an individual learning and development plan.

Finally, you have to determine where the gaps lie and which ones are skill/knowledge deficiencies. At this point you can create the training/learning/development plan.

I like my training and development plans to contain five components. Based on the job, I start with Essential Skills and Knowledge Development. These are the ones absolutely required to perform adequately in the job. These are comprised of three sub-components: technical, conceptual and interpersonal. The higher up the management chain the person is, the more conceptual and interpersonal skills and knowledge are required. But once again, it depends on the assessment of each individual. One person may need greater technical skill development, another more interpersonal. Believe it or not, we sometimes have to add "Social" or even "Public Relations" skills depending on expectations of the performer. The next set is the Development Skills and Knowledge. These include the skills and knowledge that help to shore up both strengths and weaknesses. We want our performers to strengthen what they are already good at and develop more in those areas where weaknesses will be detrimental to expected and future performance. Finally, I like to include a section called Personal Growth Skills and Knowledge. This is where each individual can devote some time to explore avenues related, but not central, to the current job. These are career enhancement pursuits and must be approved if the organization is to support them.

With respect to how one develops these, a variety of alternative means can be specified. These include, but are not limited to: internal training sessions, task force or committee participation, joining a mentoring program, reading, temporary assignments, outside courses and degree programs, participation in projects, special assignments, self-monitoring activities, and even having lunch with certain people or going out on visits.

The plan should include not only a specific plan of action, with specified activities, but also a Timeline and Agreed Upon Review Points with accountabilities.

The plan, as you can see, includes training elements as necessary, but is far more of a development path. That is appropriate in this age of knowledge workers in which lock-step routine jobs are largely disappearing. A plan should take people, in light of what is expected of them, from where they are to where we - and they - would like them to be. No two persons are identical. Neither should be the plans to help them perform optimally.


What is the difference between a competency and a skill?

A competency is an ability to perform that is required by a job. You always create competency models based on named jobs. Hence, competencies are derived based on a set of external requirements. A skill is something that an individual is able to do. We can inventory and assess the skills of people. Imagine that we want to determine what skills you possess. We may identify a broad range of these including calculation skills, analytic skills and a number of physical dexterity skills, to name a few. Now imagine that we are hiring circus clowns. We can create a competency model based on exemplary clown performers. These are what we consider necessary to perform well as a clown. We specify these competencies in a job search and then match the skills of job candidates against the competency requirements. We can also hunt inside the organization to discover whether there are internal candidates whose skills show potential, with training, to meet the competency requirements of the job. You may have skill in juggling balls. It is irrelevant for most jobs. However, it may be a competency requirement for the clown position. To summarize, skills are what you have - the ability to do things. Competencies are what a job requires.

Please don't confuse either of these with characteristics, knowledge or values. These are different. Too many problems arise when we mix up all of these terms and define them poorly. The general rule is hire for characteristics and values as these are extremely difficult to develop or alter in a person. Train to build skills that match competency requirements.

Here is a link to a brief article I wrote that is somewhat relevant to your question: http://www.nxtbook.com/nxtbooks/mediatec/tm0707/ (article on page 16).


In what kind of company would the technical skills of top managers be more important than human relations or conceptual skills?

This is a tough one; there is no fixed rule. However, in general, small, entrepreneurial companies that have some form of cutting-edge technical capability can often grab a lot of new business because of the leaders' recognized technical competencies. The innovation excites customers and the demand is such that business acumen, interpersonal skills and even communication or marketing skills are of lower priority. Often, the organization takes on the form of a cult with the leaders as technical gurus surrounded by dedicated disciples who work crazy hours to build success.

As the organization grows, this highly personalized, technically focused adventure begins to outgrow the initial visionary and innovative thrust that launched it. Reality and day-to-day concerns emerge. To survive beyond the launch requires more mature conceptual, business, marketing, financial and interpersonal skills. Other players enter the market and the cutting edge uniqueness of the service or product now faces competitors. Demand softens and the realities of the competitive marketplace begin to diminish the focus on the technical aspects alone of the enterprise. Financial supporters have to be wooed and appeased; new people who are brought in have expectations that must be met beyond the great vision of the start-up. The leaders find themselves faced with the hard-core and mundane requirements of running a business. Not all visionary and technically brilliant entrepreneurs can make the shift.

Hence, to finally answer your question, those companies that have caught fire because of the technical brilliance of their leaders (e.g. General Electric, Hewlett-Packard) come to mind first. Those companies that are largely technically driven also require leaders who are highly skilled in their technical areas. Professions such as law, accounting, medicine and certain fields of consulting, including Human Performance Technology, certainly must have technically capable leaders to inspire and generate credibility in the marketplace. However, in all of these instances, as maturity and growth occur, other skills must be added. Sustainability requires that conceptual, interpersonal, business and a wide range of organizational and management skills be present. Without these in the leadership (or if the leaders are wise, then brought in to help in decision making), the company is unlikely to prosper long term. Technical skills may be necessary. However, in the long run they will probably not be sufficient.


I'm a training professional working in the pharmaceutical industry and am seeking a basic competency model detailing what performance looks like for classroom trainers. Ideally the model would describe performance at three different levels: basic/fundamental, intermediate and advanced. Can you point me in the right direction?

There are many beliefs and models out there for you to pick and choose from. However, if you are looking for one that is structured, has a long track record, is both process and performance centered, and has been used with a large number of organizations, I recommend the program provided by the Canadian Society for Training and Development (CSTD). Here is the link, http://www.cstd.ca/certification/index.html. This is a program to certify trainers at increasingly higher levels that has been around for many years and has had solid success. I believe that CSTD offers a variety of arrangements for organizations.

Another example of a series of graduated modules for building ever-increasing training capabilities is the Trainer Competency Track (TCT). The link to this is http://www.nystc.org/committee/minutes/pdt/competency.htm. It is arranged in five levels and, at the top, includes performance consulting, organizational development and many other areas for professional growth. You'll have to do some investigating to learn more about the modules and the model that underlies it. My knowledge is only cursory concerning the programs.

Another "model" is the one devised by CHART, the Council of Hotel and Restaurant Trainers. CHART is an industry-wide non-profit group that encompasses most of the hospitality organizations and has been around for years. As you can imagine, this is an area in which a huge amount of training occurs. Here is a link to a "white paper" about the CHART model: http://batrushollweg.com/Insights&Research/WhitePapers/BlueprintForTrainerDevelopment.pdf

I could pepper you with a host of models. Many are industry specific. I leave you with a link to the directory of the Society of Pharmaceutical and Biotech Trainers, http://spbt.klickit.com/local/files/vendor%20directory/vendorproj_proof2.pdf. It has a section of trainer programs arranged in a hierarchical format. The Society itself has tackled issues such as trainer development, performance improvement and competency modeling. As this is closer to your organization's universe, I recommend connecting with SPBT and getting your hands on what it has produced.


I'm putting together a pre-test and post-test for one of our training programs. It will be given to some, but not all of the participants to determine if the participant knowledge increased as a result of the training. Since the training we do is not based on clear, measurable objectives, the only test I can create is a knowledge test of the content that is covered. Do you have any information on when a pre- and post-test is effective and when it is not?

First, let's define pre-test and post-test. A pre-test is a post-test equivalent that is administered prior to submitting "subjects" to a "treatment." In the training world, this means that you create a test based on what it is that trainees are to master as a result of the training and administer it before and after the training. This allows trainees who have already mastered portions of what is included in the training to skip these. To ensure that trainees have not memorized the responses from the pre-test when you administer the post-test, you should create a different but equivalent set of test items for each of the tests. To do this requires some knowledge of both test item construction and a bit of statistics to ensure equivalence of tests. Usually, we construct test item banks and then draw from these at random to ensure equivalence. It may sound complicated, but it is not so bad. You need to have someone help you with this until you know what you are doing.
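
If it helps to picture the item-bank approach, here is a minimal sketch - in Python, purely illustrative; the bank contents, form size and function name are my own invention, not a prescribed tool - of drawing two non-overlapping forms at random from a common item bank so the pre- and post-tests cover the same objectives with different items:

    import random

    def draw_equivalent_forms(item_bank, items_per_form, seed=None):
        """Draw two non-overlapping test forms from one item bank.

        item_bank maps each objective (or content area) to its pool of items.
        Pulling the same number of items per objective for both forms keeps
        the two forms roughly equivalent in coverage.
        """
        rng = random.Random(seed)
        pre_form, post_form = [], []
        for objective, items in item_bank.items():
            if len(items) < 2 * items_per_form:
                raise ValueError(f"Too few items for objective: {objective}")
            drawn = rng.sample(items, 2 * items_per_form)
            pre_form.extend(drawn[:items_per_form])    # goes on the pre-test
            post_form.extend(drawn[items_per_form:])   # goes on the post-test
        return pre_form, post_form

    # Example with a toy bank of four items per objective, two per form
    bank = {
        "state the safety rule": ["item 1", "item 2", "item 3", "item 4"],
        "complete the claim form": ["item 5", "item 6", "item 7", "item 8"],
    }
    pre_test, post_test = draw_equivalent_forms(bank, items_per_form=2, seed=7)

Random draws like this give you equivalence only in expectation; for anything high-stakes you would still want a statistical check (for example, comparing item difficulty across the two forms), which is the part for which I suggested getting some help.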

Now for the more important matter. Training has a long history of being criterion-referenced. This means that specific and measurable criteria for mastery are established, normally derived from performance-based task analyses that lay out the required outputs and outcomes of a job (or set of tasks within a job). Training is designed to ensure that trainees meet these criteria. The criteria are expressed through verifiable objectives that are trainee centered, contain the verifiable performance and state a standard of acceptable performance given the level of the trainee group. Test items are created to perfectly match the objectives. Our book, Engineering Effective Performance Toolkit, details all of this with lots of job aids.

If you do not have performance-based objectives, it is difficult to create any kind of valid test. In addition, testing declarative (talk-about) knowledge of a content area that requires procedural (doing, using) knowledge is of little use except to say, "I tested them." There is very little correlation between being able to talk about what you can do and being able to actually do it. The declarative knowledge test will only be useful for those portions of the content that are to be remembered verbally. They have very little impact on performance results (e.g., talking about the features and benefits of a product versus actually being able to demonstrate the product in a convincing manner).

As for sampling trainees, that is fine if the sampling is random and meets sampling criteria. You can find guidelines for this on the web or in books on testing and sampling.

To conclude, what is the purpose of the activity? If it is to show that there are knowledge gains from a course so that someone can say, "See, they learned," then the approach being taken will work. Trainees will probably remember something that they did not know before. If it is to verify whether or not the training had an impact on their ability to perform, then the results of such an exercise will not be helpful.

As a side note, we are currently conducting a research study on transfer of training. As has been found elsewhere in many studies, declarative knowledge learning and retention is high. The trainees learn and retain enough to do well on the tests and significantly better than on the pre-tests. However, when we examine application on the job, the portrait changes markedly. They may "know," but they do not "apply." We are studying the variables that facilitate or inhibit on-job transfer.


Where can I find current research on transfer of training? Who are its key researchers?

Transfer of training - transfer of learning - is a huge and multi-faceted focus of interest for different kinds of researchers. In 1997, I edited a special issue of Performance Improvement Quarterly on the subject. Since then, research has continued. There are theoretical questions embedded in the transfer arena as well as practical ones. Some researchers are interested in the mental mechanisms of transfer, others in the external factors affecting transfer of training to the workplace. Some are mainly interested in children or students and school learning whereas others are concerned about workplace and return-on-investment issues. And when we examine transfer, are we concerned about lateral transfer, vertical transfer, near or far transfer, or fluid or crystallized transfer, to name only a few of the types of transfer researchers study? In the Performance Improvement Quarterly issue I mentioned above, Achi Yapi and I reported on a study we conducted in Ivory Coast on the effect of behaviorist- and cognitive-based cases on transfer of learning (Stolovitch, H.D. and Yapi, A., 1997. Use of case study method to increase near and far transfer of learning. Performance Improvement Quarterly, 10 [2], pp. 64-82). The hypotheses we stated were confirmed by our study.

To come back to your question, I strongly recommend that you read broadly on the topic. For this, conduct an online search to identify articles dealing with more general facets of transfer. Use these to help define terms and identify research agendas as well as key players. The most promising databases are ERIC, Psych Abstracts, Dissertation Abstracts and Current Index to Journals in Education. I especially like to pore through Review of Educational Research.

Notice that I am not giving you fish, but rather am suggesting ways for you to do your fishing. I am not current enough on the latest research in this area and so I think you are best served by delving into the literature on your own.

 


How can the use of effective job feedback designs or job aids replace on-the-job training? Under what conditions would formal training be a better solution?

You pose your questions as if we might be pitting job aids against on-the-job training and both against some form of formal training. I suspect that this is not your intention. Rather I think that you are looking to discover how these three all fit together. And, yes, they all do. Job aids are basically external memory storage devices. They come in many varieties, but all do the same thing. They guide you to perform without your having to remember all of the steps, decisions, or details. They all range in complexity and sophistication from very simple ones such as an address book that remembers phone numbers and addresses for you to much more complex ones that can guide you to correctly calculate your income tax or make an important technical or buying decision.

Examine an office photocopy machine. You will see a small screen that helps you select the right paper, choose whether or not to collate, staple, or sort, etc. Based on what you want to do, it lists the steps to follow. If you make an error, it helps you fix it. I use a paper job aid that I keep in my wallet for making phone calls when I am traveling abroad. It lists countries, matches them with access codes and numbers, and provides me with costs per minute. It even has a little troubleshooting guide to help me out if I cannot reach my party.

On-the-job training (OJT), for the most part, is informal. A person is assigned to the mail room or the warehouse, where she or he is placed into the hands of an experienced employee who "shows her/him the ropes." How organized this type of training is depends on the organization and its managers. OJT can include shadowing an experienced worker, having a work buddy assigned, or simply having access to people who know how to do "it" when you get stuck. There are, of course, more formal structured on-the-job training (SOJT) programs for which materials, trained workers in SOJT practices, and evaluation systems have been implemented.

All of these work to a greater or lesser degree. The more informal the OJT, the less certain you are of the effectiveness of what is taught. OJT, however, is the most common form of training to be found in most organizations. The U.S. Bureau of Labor Statistics estimates that four to five times more time and money is spent on informal OJT in American business and industry than on formal training programs.

Helping someone to learn on the job works. So do job aids. Job aids are best suited for providing information that a worker must access but would have difficulty perfectly retaining due to excessive detail or infrequent use (e.g. hazardous materials and how to recognize each and every one along with what intervention is best when something goes wrong; sequence of steps for restarting an assembly line after troubleshooting). They are also useful for making decisions when a number of variables have to be considered.

My recommendation is not to substitute job aids for OJT, but rather to use them together. Create excellent job aids and have experienced workers demonstrate their use with novices and then have the novices practice using the job aid under the watchful eye of the OJT coach. This improves consistency of performance and decreases idiosyncratic ways people develop for doing work.

As for formal training, the general rule is that if there is a need for specialized skill and knowledge on the part of the trainer beyond what a skilled worker possesses, then formal training sessions are probably best. Once again, these can be combined with OJT. "Theory" is best done outside of the immediate work environment. Accurate and consistent rules and principles are best taught in more formal settings (although the sessions may take place at the work site).

To sum up, one is not a direct substitute for the other. Provide formal training to lay the foundations of job skills and knowledge, to build advanced knowledge and skills, and to satisfy legal requirements. Use formal training (live, self-paced or virtual) to allow workers to try out new skills in settings where they will not disrupt the work flow nor be distracted by what is happening at their work sites. Use OJT to ensure workers are applying their skills and knowledge appropriately. Create job aids for tasks that can be performed by following steps or by applying the job aid directly to specific work tasks (e.g. a color chart card to lay against hydraulic fluid in a glass tube to determine what action to take). Have these introduced formally or on the job and provide OJT to practice appropriate use of these job aids.

Our book, Beyond Training Ain't Performance Fieldbook, has more about job aids, how/when to create them, and what varieties exist.


Is there a method to determine if an individual's mindset is activity-based versus results-based?

The most direct way of doing this is to provide a sample case to the individual and ask him/her to describe (or work through, if there is time) how s/he would deal with it. Give the person free rein and encourage him/her to approach it as if there were no constraints. This is for those whose work you cannot access. You can also ask the person to describe a performance problem from the past and have him/her present what s/he did to solve it. See the literature on performance-based interviewing.

Another alternative is to provide a case to the individual with two alternative approaches, one activity-based and the other results-based. Have the person select the one s/he prefers and explain the rationale. Of course, you have to be careful not to make the preferred one obvious.

If this is someone who is working inside your company, ask to see work samples. Obtain documents and verify with clients what was done. Probe for the rationale behind the approaches taken. Factor in organizational or client constraints.

For me, performance-based interviewing and hiring offers a higher probability of success in candidate selection than any other method I have encountered. Don't worry about "mindset." Focus on what the person actually demonstrates to you when faced with an operational issue. After a couple of cases, you should have a pretty solid portrait of how the person goes about performing.

One note of caution: since people who are being tested usually want to do well, you have to avoid building in bias toward an activity versus results orientation. If this is an internal person who knows that the organization prefers or in the past has favored a given approach, then s/he may opt for it to "look good."


In which types of companies are technical skills more important than human relations or conceptual skills for their top managers?

The answer is: in very few, if any. Perhaps a very well-known and well-established research and development company might qualify, or a company that is engaged in technical activities related to the military or other technically strategic areas for which there is little or no competition and that is subsidized by government funding (e.g. Livermore Labs; Sandia). Realistically, however, top management has to "have it all," but not necessarily equally in all of the members of the senior management team. The leader may be strong in the financial areas and various members of the company's leadership may have differing strengths. Together, like a sports team, they integrate their capabilities to do a great job.

That said, a top leader must possess vision, a sense of the market and have connections with investors as well as means for inspiring his/her team, shareholders and key personnel. The head of the company may not be pleasant (e.g. Disney's former CEO, Michael Eisner; Rupert Murdoch) but they do have vision and a strong conceptual handle on their markets as well as their capability as an organization.

To conclude, while technical skills can be extremely beneficial for top managers, especially in highly technical fields, they are insufficient (except for the rare cases I cited, and even then...) to ensure company success long term. Conceptual skills are essential to make the company prosper. The ability to inspire, lead or drive people to perform, and to recognize and reward high performance, is essential for company leaders. I would guess that an examination of the most successful companies will show that in most cases, the top managers are not the most technically qualified. Rather, they are the ones who find the technically talented and draw from them their very best.


I am teaching a class in training and development and am using your books, Telling Ain't Training and Beyond Telling Ain't Training Fieldbook, for my graduate students in a Master of Arts program. On page 141 of the Fieldbook, you advocate selecting training topic assignments and then state "Distribute a 10-minute section of the training program to each participant." Where might one find appropriate topic assignments? Could you suggest several topics that have worked for you in the Video Practice Session (VPS) and provide an example?

The VPS is like a shell. Any content that the participants normally teach is usable and can be placed within it. If a new course or program is being introduced, you would chop up the new program into discrete 10-minute chunks. Here are examples from some sessions:

  • A real estate company introduces a new program for certifying agents. The trainers are introduced to the program and trained on it. They then receive slices of the program for practice during the VPS.
  • Trainers train in car dealerships, teaching Warranty Claim processing. Again, they go through the program as learners and then examine the leader guide, are assigned parts of the course and teach a 10-minute slice during the VPS.
  • Recently, I had Store Managers from a successful bookstore chain. They developed their own 10-minute slices on things they teach to their staff. Topics consisted of: selling the customer card; inventory checking; book location; ordering books; handling customer complaints.
  • When I had a group of people who were not yet training in an organization, I had them select small topics for which they had expertise. Some examples were: decorating a cake; checking oxygen tanks before scuba diving; solving a math puzzle; how to read resistor codes; using a slide rule and other similar fun topics.

The key is for them to have a model for training, such as our Five Step Model in Telling Ain't Training, receive coaching and have opportunities to practice before they are videotaped. Then they are placed in a VPS setting. We find the VPS most useful when a company is introducing a new program to its trainers: after taking them through it, slice up the program and have them teach parts of it. It is also very useful to bring experienced trainers in for a training clinic. Have them select a part of what they already train and then have them go through the VPS for coaching.


In the world of Competence Assurance and job profiles, what percentage of total tasks/procedures must a candidate be assessed against to assure competence? Is a representative sample of critical tasks sufficient or should you assess against every task/procedure available?

This is a knotty issue. It may sound like a cop-out to say, "It depends," but that is the only response I can offer. The more risk that is involved in the job (for example, Bomb Disposal Technician), the more you must test for "everything." The reverse is also true. If the job is one of low risk (for example, magazine salesperson), then it is fine to sample only those tasks critical to the success of the job. Here is what I recommend. Perform a job analysis that systematically lays out all of the job tasks. Go at least three levels down: major tasks, sub-tasks and sub-sub-tasks. Using experts who know the job well, top-performing incumbents and, if feasible and appropriate, "customers" of those in the job - internal or external - have them rate each task on three dimensions: frequency, importance and risk. Use a simple rating system of High, Moderate and Low. Any task that receives three Highs should definitely be assessed. Any task with two Highs or two Moderates should also probably be checked. Feasibility and time become issues here. You can probably let go any task with two or more Lows and no Highs. This is not a scientific procedure, but it does allow you to make assessment choices based on credible input. I do have one caution: if any tasks have legal or safety implications, assess for these.
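
To make the rating rule of thumb concrete, here is a small sketch - Python, purely illustrative; the thresholds simply encode the guideline above, not a validated standard:

    def assessment_decision(frequency, importance, risk, legal_or_safety=False):
        """Suggest whether to assess a task from its H/M/L ratings.

        Encodes the rule of thumb: three Highs -> assess; two Highs or two
        Moderates -> probably assess; two or more Lows with no Highs -> may
        skip. Legal or safety implications override everything: assess.
        """
        ratings = [frequency, importance, risk]   # each rating is "H", "M" or "L"
        highs = ratings.count("H")
        mods = ratings.count("M")
        lows = ratings.count("L")

        if legal_or_safety:
            return "assess"
        if highs == 3:
            return "assess"
        if highs >= 2 or mods >= 2:
            return "probably assess"
        if lows >= 2 and highs == 0:
            return "may skip"
        return "judgment call"

    # Example: a frequent, highly important, moderate-risk task
    print(assessment_decision("H", "H", "M"))   # -> "probably assess"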

On a side note, I am not a big fan of "competency" assessment. Competencies are suppositions. We suppose that these competencies will produce desired results. I prefer Performance Modeling. Here is my explanation. Let us assume that you are assessing those in an automotive sales position. A competency model might include "excellent oral communication." The thought is that a car salesperson should be able to speak clearly and talk to the customer in a convincing manner. Perhaps this is true. Perhaps not. Is it possible to determine a customer's needs and "hot" buttons and make a sale without being a wonderful speaker? The answer, based on our studies, is "yes." The performances we are looking for are overt and verifiable and lead to job success in demonstrable ways. Performance is valued accomplishment derived from costly behavior. Both are overt and can be empirically assessed. Competencies are assumed to be correlated with successful performance. Therein lies the danger.


I would like some advice on paper versus electronic job aids. I'm concerned that paper gets outdated and people still hold onto the old information, even when updates are provided. The electronic format is good, but it's not as convenient so may not be used as much. Would you please share your thoughts on this subject?

Paper or electronic, both are media and each has its place in the array of job aids we use. Part of the decision has to do with the use people will make of the job aid. If you are baking a cake or making a stew and have stuff all over your kitchen counter, a paper cookbook or index card is probably more convenient than an electronic device. Similarly, a job aid pasted to a telephone with emergency numbers is more practical than some electronic instrument. On the other hand, job aids for troubleshooting equipment are more helpful in an electronic mode, particularly if they allow access to databases and expert advice to deal with highly variable circumstances.

So how should you decide which medium to use? Here are some rules of thumb:

  1. For simple procedures that do not change much over time, paper job aids work well. They also tend to be more cost effective and can be carried around, pasted up or mounted in a prominent place very easily.

  2. For complex procedures or requirements to access a variety of procedures, some of which are complex, electronic job aids are generally better.

  3. Consider the level of technical sophistication of the end-users and their preferences. Some people cannot adapt to PDAs. They prefer paper daybooks and planners. For less technologically experienced users, paper job aids are generally easier to use.

  4. Consider the environment in which the job aid will be used. In some, electronic devices will not work well (e.g. in an underground tunnel; at a buzz saw). In other environments, electronic job aids are by far superior (e.g. inside the cockpit of an airplane; at a terminal). Analyze the implementation context to determine which medium is more compatible with environmental characteristics.

  5. Consider the volatility of the content. The more frequently the content of the job aid is likely to change, the more appropriate electronic job aids are if there is a means to update these automatically. If not, they are no better than paper ones.

  6. Finally, consider cost. If electronics are at hand and are cheap and easy to use, then that's the way to go. If some of the end users will not have access to the electronics and you can produce a paper-based job aid quickly and cheaply, then paper is your best bet.

The bottom line is that practical considerations reign when it comes to selecting the appropriate job aid medium. Implementation variables usually dominate here. The most important aspect of a job aid is its design for accuracy, clarity, ease of use and accessibility. Focus more on these than which medium is better. The right choice will naturally emerge.


I am in a class on Entrepreneurs and we are required to put a business plan together. I will be putting together a plan for my newly formed performance consulting company and part of it requires us to put together our pricing. Do you know where this information might be available?

There are no published figures on what individual private performance consultants charge. A lot depends on their notoriety. I know several who charge $6,500 per day and more. Others are at the $300 per day level. Three possible sources are:

  • ASTD State of the Industry, 2003. In that year, ASTD published salaries for internal performance consultants. The range for 2002 was $65,218 to $124,200 with a mean of $80,671. The sample size was 276.
  • The second source is the Instructional Systems Association (ISA). However, they only tend to share this type of information with their members.
  • ISPI in Washington. They may have some data on fees.


What is conceptual skill and why is it important to top management?

The term "conceptual skills" is a convenient one to describe the general analytic ability of people to formulate and deal with ideas. Conceptual skills are used most frequently as part of an overall description of what managers require to do their work effectively. Often, you will run across the following:

To be an effective manager you require three major sets of skills. The first is "technical skills," which involves the processes and specific abilities needed to perform tasks related to the job. These are also known as "hard skills." The second set is "human skills" and centers on the ability to interact effectively with people. The third is the "conceptual skills" set that deals with the formulation and manipulation of ideas. This includes the ability to work with abstract notions and relationships and to do creative problem solving.

While this brief response probably does not do the term full justice, there are those who have specialized in this area. Skills of an Effective Administrator by Robert L. Katz is considered to be a Harvard Business Review classic.


Is there any research that shows that learning transfer happens more effectively and efficiently when inside trainers are used versus outside consultants? We are implementing a new IT system and our COO thinks it is more effective to use the new system's trainer, who will learn our processes and teach the new workflows, versus a T3 (train-the-trainer) approach using trainers from each department who have cultural knowledge and understand the work and the application of the new system in their environment.

There is no research on this issue. It's not a researchable question and here's why. There is a whole host of variables that affect the impact of a training program. Many of these have nothing to do with the training itself as demonstrated in my book, Training Ain't Performance. There is also the nature of what is to be learned and the experience of the learners with respect to this content. Finally, the characteristics and capabilities of both inside and outside trainers affect the learning and performance results.

More important than who does the training are four essential activities on which you should focus:

  1. The front-end analysis to identify all of the key variables that will affect learning, motivation to apply and actual competent application on the job.
  2. The design of the instruction itself. If it is well designed with lots of practice, contains job aids that can be used in the work context and includes cases and/or simulations that mirror actual work situations, the probability of success rises regardless of where the trainer comes from.
  3. A really good train-the-trainer preparation that focuses on transforming rather than transmitting, that gets the trainers ready to deliver the solid instructional design and that emphasizes how to provide feedback and encouragement. In this instance there is an advantage to using well-selected internal people as trainers. You develop internal skills and can then convert the trainers into on-the-job coaches and consultants post-training.
  4. An excellent support system. Get the supervisors up to speed to encourage and assist performers. Have them insist on appropriate application. Create a temporary technical support hotline for those in need post-training.

It's not where the trainers come from that is the issue. It's the system you build that results in desired performance valued by all stakeholders that counts.


When/under what circumstances should I use a RASCI model and how do I use it?

The RASCI chart is a planning tool. This means that you should use it at the very beginning of a project once you have determined its size and scope and who the project "players" will be. In our learning and performance improvement project planning work, we usually begin with a timeline that lays out all of the tasks to be accomplished and the estimated number of person days each will require for completion. Once the timeline is established, we then turn to our RASCI chart to list the same tasks as in the timeline, list the project participants, determine the roles each person will play for each task and - our innovation we believe - the estimate of time required for the person to execute his or her assigned role for that task.

There is an interplay between the estimates in the timeline and the time assignments in the RASCI chart. They ultimately have to match. You adjust back and forth until they are perfectly coordinated.

The meaning of each of the letters is as follows:

R = Responsible (the person who must ensure that the task is completed)
A = Approval (the person/s who must sign off on the task)
S = Support (the person/s who can help/facilitate in the execution of the task)
C = Consult (persons you can turn to for expert information and advice: technical, administrative, legal, marketing, instructional)
I = Inform (persons or groups who must be kept up to date on what is happening in this task).

Every step requires an R. Many steps require an A, S or C. I is only noted if someone requires information on the project tasks but plays no active role (e.g., a manager; unions). Within each step, estimate the number of hours (or days) it will take each participant to perform her/his part. Enter the estimate in the lower right box for each task/participant. Here is an example of a RASCI chart filled in.

Project RASCI Chart (time specified in person hours)
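Since the filled-in chart will vary by project, here is a minimal illustrative sketch in Python of what such a chart captures. The tasks, participants, roles and hour estimates are invented for demonstration; the totals show the per-person time budget discussed below.

```python
# Hypothetical RASCI chart: {task: {participant: (role, estimated_hours)}}
# Roles: R = Responsible, A = Approval, S = Support, C = Consult, I = Inform
rasci = {
    "Conduct front-end analysis": {"Dana": ("R", 24), "Lee": ("S", 8), "VP Ops": ("A", 2)},
    "Design instruction":         {"Lee": ("R", 40), "Dana": ("C", 6), "VP Ops": ("A", 2)},
    "Pilot and revise":           {"Lee": ("R", 16), "Dana": ("S", 8), "Union rep": ("I", 0)},
}

# Total time each participant must budget over the life of the project
totals: dict[str, int] = {}
for assignments in rasci.values():
    for person, (_role, hours) in assignments.items():
        totals[person] = totals.get(person, 0) + hours

for person, hours in totals.items():
    print(f"{person}: {hours} person hours")
```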

When you have completed the chart and it matches the timeline as well as the expectations of the organization, you can use this to inform all players about the total amount of time each will have to budget over the life of this project. You can also generate individual calendars for each of them with the dates on which they will be required to work on the project and the specific task. We generally present the RASCI chart during the Project Launch meeting - a must for successful projects.

We (my colleagues and I at HSA) have found that when they are done well and approved by the client/organization, the timeline and RASCI chart make project management so much easier. We usually assign someone we euphemistically call the "project nag" to manage the timeline and RASCI chart. This is often an intern or junior team member. The purpose is to provide this person with a developmental experience. She or he keeps people on track and reminds them of due dates. No one may argue with this person. This adds a layer of protection to the project manager and usually results in the work getting done on time.


I'm working on a performance improvement project and just finished the cause analysis. The gaps I found fall mostly under trust issues and open communication. Should I concentrate on the examples and factors of why that trust is not there or should I state that lack of trust is the reason they are having these gaps? Also, when looking at the Behavioral Engineering Model, where would lack of trust fall under? Should it go under information or capacity?

It's great that you uncovered these issues. The first cut is to state that key factors inhibiting performance are lack of trust and stifled communication. The impact they have lowers the motivation of performers to engage in whatever it is that is being sought. I would continue to dig more deeply and state specifically what the lack of trust and poor communication indicators are and the impact of them. Gather data to support your claims so that no one can argue that this is "just your opinion." Obtain solid evidence.

With respect to the adapted Behavior Engineering Model, I would place these in the "motivation" box. Why? Because motivation is affected by value, confidence and mood. All of these factors can raise or lower motivation to perform. Mood, in a work setting, is greatly influenced by the atmosphere of the workplace. Intimidation, perceived threats, lack of trust, a sense of unfairness and a fear to communicate openly can create a negative "mood" in performers. They back off or lie low. Negative factors have a stronger impact on motivation than do positive ones. In other words, negative factors lower motivation and performance more rapidly than positive factors raise them. We get "bummed out" very quickly and withdraw from the game. Appropriate interventions will fall under the category of "motivational systems." These include eliminating the issues underlying the lack of trust, or dealing with them squarely, as well as addressing those related to the communication barriers.


Do any of the consultants you work with who are internal (company) consultants report difficulties in getting closure with their internal clients? We currently have some 19 projects in process but are having difficulty with some when it comes to closure. It seems that other priorities delay meetings, implementation activities, etc. If we were external and continuously invoicing clients for hours worked, it would probably be a different story. Any thoughts or suggestions?

Internal or external, it doesn't seem to make a whole lot of difference. Most of the time, we work alongside the internal people so that both of us are affected. In addition, internal people get paid regardless of delays, whereas the externals can't bill, which leads to devastating repercussions for their employees and freelancers.

Having established a band of brothers culture between us, let me turn to some suggestions. These are based on our own practices and do help to some extent:

  1. Create contracts with your clients in the same way that external consultants do. Establish clear expectations with respect to deadlines, penalties, etc. Become a consulting and service provider to your clients using the same tools as your externals.
  2. Find a senior level champion with political clout. Take her/him to lunch. Explain the problem. Build a business case with examples and show what this is costing the company. Work with the senior manager to put reasonable pressure on the clients.
  3. One of the biggest problems we all face is accessing busy subject-matter experts (SMEs) or key sign-off managers. This can make things drag intolerably. Along with your initial project timelines, create what we call RASCI charts. These identify all project players, note which role each one plays at each step of the project and estimate the time requirements for each task individual players are responsible for performing. This informs them of their time commitments over the life of the project. If anyone has difficulties, problem solve with him/her and identify alternate personnel to take over parts of their tasks. Use this along with your project timeline to generate individual calendars so that each person knows exactly when he/she has to do whatever is required. This works very well.
  4. Assign a "project nag." Yes, you read it right. At the project launch meeting, gather all the players (including client representatives) to go over the project, identify potential delay spots and create means for overcoming these. Then, get everyone to accept an identified individual (we often use an intern or a junior person) as the project nag. No one may argue with, insult, pull rank on, etc. this person, whose responsibility is to inform people of their due dates and how their delays may affect others. They literally nag people, reminding them of commitments. It's great for the team because they have other duties and forget project due dates. This keeps them on task. It helps the project manager, who can focus on project matters and not on negotiating new timelines for individuals. It is a wonderful development experience for the person tasked with project nagging. We have found it very successful.
  5. Track costs of delays and regularly inform team members and clients of them. Copy the senior manager on your memos. Money often talks.
  6. If you know of some people who in all probability will cause delays, create artificial deadlines for them. Hold them to these deadlines, painfully retreating from them to the real deadlines with much hand-wringing.
  7. Finally, find an active sport where you can take out your frustrations. I run long distances. It helps soothe my troubled spirits when those **@*&%^$$^&! project delays occur.


At the ISPI Conference in Chicago this past fall, you stated that learning styles are pretty much discredited now as they only account for 2% of learning effectiveness. You said there was no firm evidence to support them. Can you give more information on any academic literature to back up what you're saying? I agree with you but have colleagues who strongly disagree.

I mentioned two different things. One had to do with sensory modalities. Some people are more visual than auditory...or kinesthetic. If you explore the literature on what are called ATI studies (aptitude by treatment interaction), you will come across significant differences with respect to favoring one sense over another, but there is not enough "power" in this difference to be overly concerned about it compared with many other variables that affect learning. That's why I recommend stimulus variation. This is pretty potent if done well and meaningfully.

This is different from "learning styles." Here is a previous question and answer from the Ask Harold Archives:


We frequently hear the term "learning styles" tossed around like "I know there are tests for people to identify their learning style" and so on. People use this to argue for one delivery media over another like "My learning style prevents me from engaging in e-learning!" I would like to know if there are any scientific grounds for the existence and definition of learning styles. If so, what are they and what are the main messages coming out of this literature?

There is a lot written on learning styles and a lot more folklore circulating about it. Here are two useful articles that deal with your question. The first, McLoughlin, C. (1999). The implications of the research literature on learning styles for the design of instructional material. Australian Journal of Educational Technology, 15(3), 222-241, provides a good overview of the main currents and definitions of terms that are similar, yet different. These include: learning preferences, cognitive styles, personality types and aptitudes. It also examines two main learning style theoretical approaches, one that divides learners along wholist-analytic and verbaliser-imager dimensions and the other that suggests four categories: activists, reflectors, theorists and pragmatists. The second article, Stahl, S. (1999). Different strokes for different folks? A critique of learning styles. American Educator, 23(3), 27-31, questions the validity of the learning style construct itself. In the article, Stahl examines learning style inventories and questions their reliability.

My take on all of this is that there are individual differences that affect the way we learn. However, there are also many rules with respect to learning that apply to all of us as human learners. While it may be useful to factor in variations in learning approaches, it is probably more useful to apply those universal principles that research on learning consistently suggests result in higher probabilities of learning. What are they? Six simple ones:

  • When learners know why they are supposed to learn something (a rationale with a credible benefit to them), the probability of learning and retention increases.
  • When learners know what they are supposed to learn (a clear, meaningful objective), the probability of learning and retention also increases.
  • If what is to be learned is clearly structured and organized so that the learner easily sees the organization and logic, again, learning and retention probabilities increase.
  • If learners have an opportunity to actively respond and engage in the learning in meaningful ways, once again, learning and retention probabilities increase.
  • If learners receive corrective and confirming feedback with respect to responses they emit or activities in which they engage, their learning increases along with retention.
  • Finally, if the learner feels rewarded for the learning, has a sense of accomplishment or is given an external recompense for the learning that he or she values, learning and retention have an increased probability of occurring.

With respect to being more visual or auditory, while there may be significant differences among learners, stimulus variation is more important.

Concerning the use of media or self-pacing, issues about learning tend to focus more on the design of the mediated or self-paced material than on the medium itself. Richard Clark has written extensively on media not being the message. If the mediated and/or self-paced material follows the universal rules and is well supported, it will work. Some learners who are not used to learning on their own may require additional support and control. Distance universities, such as the British Open University or Athabasca University in Canada, have learned how to do this well.

So, to conclude, there is a lot of folklore and some science concerning individual differences in learning. Best to apply universally sound methods to enhance learning. Vary activities to maintain interest and attention. Provide support and control mechanisms to help learners "stick with it." This way, you address all learning styles.


Learning packets are one way in which our employees receive education on new topics. We struggle with finding a way to consistently and realistically determine how long it should take them to finish a packet. Sometimes they are doing them at home, and then they are paid for the time. So pay becomes a part of the scene and can cause problems when people feel the time estimated to complete the packet is in error. We also offer classes that require study time outside of the classroom. They are also paid for this time. We struggle with determining the correct time for this work as well. Any insights or experiences you have had with determining independent study time? Some staff feel they are not given enough time while others seem to need less. What is the most equitable way to determine these time frames?

There are several methods for doing this, one of which is somewhat radical. A simple method, once the package is complete, is to ask a few (4-6) learners to go through it while someone who acts as a proctor is present. Time the learners and establish an average time. Of course, your learners for the trial should be representative of the range of learners in the entire population.

A second approach is to establish a time for each section of the packet (also based on tryout testing) and to state up front that this learning package should take "X" number of minutes to complete. Create a matrix that contains the various sections and suggested time allotments. Explain that you are providing this in the interest of the learners so that they can budget sufficient time in advance. Explain that this is based on tryouts and that it is 20% longer than the average time taken. Please make sure that this is true.
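As a minimal sketch of the arithmetic behind this second approach, here is a short Python illustration; the section names and tryout times are hypothetical.

```python
# Hypothetical tryout times (minutes) from representative learners, per packet section
tryout_times = {
    "Section 1": [22, 18, 25, 20],
    "Section 2": [35, 40, 33, 38],
}

BUFFER = 1.20  # publish a time 20% longer than the observed average

for section, times in tryout_times.items():
    average = sum(times) / len(times)
    suggested = round(average * BUFFER)
    print(f"{section}: average {average:.0f} min, suggested allotment {suggested} min")
```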

A third, more radical approach, but nevertheless perfectly acceptable, is to state that the job requires continuing education and that there are periodic tests which employees must complete. The results are placed on their records and considered an essential part of their performance evaluation. The requirement is for employees to pass the test. If they desire, they can request a self-study packet to prepare. It is free and contains all the content and sample tests to help prepare for the official test. This places the responsibility to learn on the learners themselves.

This last approach is more radical in that the requirement is not the training, but the passing of a test. This requires a system that makes continuing education essential and valued for the job and truly rewards success. You can even have it linked with professional CEUs from various organizations so that the learning and testing is officially and externally recognized.


What is the difference between efficiency and effectiveness?

Efficiency is productivity with minimum waste. It is the achievement of maximal results with the least amount of resources and energy expended. An example would be your attainment of a goal in the least costly manner with the least amount of your energy and resources used. Highly skilled persons usually perform tasks that achieve high results with far less effort than novices. Watch an expert swimmer do laps in a pool. She or he seems to glide effortlessly back and forth. Caloric expenditure is minimal compared to the novice who splashes around and is soon exhausted doing far fewer laps.

Effectiveness is achieving the desired result from the expenditure of resources and effort. Think of a medication for an ailment. If it is effective, it will cure the ailment. An aspirin for the common headache is a simple example. Taking a cough medicine for the headache would be ineffective.

In learning and performance, we strive to create interventions that exhibit the two characteristics of effectiveness and efficiency. Suppose a person had to be able to dial the correct number for a given emergency, but the number varied depending on the nature of the emergency. You could drill the numbers into the person's head and make him or her repeat these until a perfect result was achieved. It would be effective, but not necessarily efficient. Another way might be to provide a telephone with clearly labelled emergency numbers on it and spend a few minutes training the person on how to press the appropriate buttons. Hence, for a given emergency, the person would simply press the right button. Much more efficient than all the drill and practice and the possibility of error over time.

Remember, determine what will be effective first. Then determine the most efficient means for getting there. Focus effectiveness on results and efficiency on the means for attaining them.


I read your article "Keys to Performance Consulting Success." However, it still seems like there's a huge canyon to bridge between doing training-only and performance improvement. You have to acquire the skill set (hire new people) to get the clients for performance improvement. How do you justify acquiring the skill set if you don't have the clients?

I would imagine that you already have clients asking for your services. I am guessing that they are requesting "training." Your first step is to reassure them that you are there to help. Your response to their request is, "I can help you solve your problem." Then you begin probing for the true business need: "Imagine that whatever we do for you works perfectly and you're totally delighted. What is different in this scenario from the way things are currently being done and accomplished?" By documenting the ideal and the current states, you will begin to see (and perhaps so will your client) that there are other factors requiring attention beyond training. Now is the time to offer a broader, "performance" vision. And thus a client is born.

I know that this sounds a bit simplistic. However, the basic approach is sound. I recommend beginning with a current client who is open to this form of questioning and would be supportive to a broader, systemic vision of what is needed beyond checking off a "training box." Work with this client on a high-probability-of-success project. Then use the project success and the client to market to others. Write up the project for internal newsletters. Invite potential clients to a show and tell. Let your successful client be your advocate. If you build it and market it, they will come.

As for the skill sets, unless your current instructional people are too set in their ways, undertake a diagnosis of current competencies and examine those they require. Establish a development program for them. Without sounding too self-serving, please get a copy of our latest book Training Ain't Performance from www.astd.org. It includes a diagnostic tool and suggestions for becoming a performance consultant. The resources section offers a number of resources to help build your team. I hope this helps as a starting point.


What are some standard questions that should always be asked when speaking with a SME to ensure you find out everything you need to know?

This is a tough question in that subject matter content greatly varies. Also the nature of the subject matter expert (SME) is highly variable. Is the person someone who has deep content knowledge? Or is the SME one who has a very large repertoire of practical experience? This makes a big difference.

A content person, someone who can speak at length about a specialty area, may possess a great deal of declarative knowledge. This person can describe, explain, give reasons, and provide a highly articulate verbal presentation of a field. Examples would be lawyers, scholars and professors.

An expert who has been doing the job for a long time and automatically performs complex tasks possesses strong procedural knowledge. She or he can do the job. Examples of this might be a fine chef, a software troubleshooter, a pilot or a New York cab driver. The expertise resides in the ability to "do" something.

Each type of expert requires a different approach. Those who can "talk about" a subject must be directed and probed. They will assume you understand the jargon and possess the concepts that are very familiar to them. With them, you have to stop frequently and ask for examples. You must have them illustrate what may appear to you (and novice learners) to be highly abstract. The questioning technique requires constant demands for clarification with illustrative examples. With these types of SMEs, you may be more in danger of learning too much rather than too little. You have to lay out the limits and scope of what you require. Then you must verify with other SMEs to ensure that the explanations you have received are not one-sided.

With "doing" experts, the best approach is to have them show you. As they do this, ask them to talk out loud. Constantly ask them to describe what they are doing and why they are doing it. Probe for how they make decisions. Then, if possible, play back to them what they have demonstrated and explained. Do it, even in simulation, as they watch you perform. Capture their feedback.

In all cases, once you have written up what you have gleaned from your SMEs, go back and confirm your write-ups. Have them give you feedback. Always verify your understandings.

To summarize, there is no perfect set of questions to ask. However, by preparing SMEs for what you wish to get from them, by listening, probing and continuously playing back your comprehension and by returning with a written summary of what you gathered from them, you will increase the probability that you are on solid ground. Ensure that what you have is accurate, current and complete, given your needs.


What is the difference between ability and skill?

A bit of a tricky question here, as with so much terminology. Because most of the social sciences have not nailed down specific and universally accepted definitions for all the constructs they commonly (and often loosely) use, it is difficult to come up with "official" meanings for them. The usually accepted definition for "ability" is an individual's potential to perform, given the opportunity to do so. This is in contrast to "aptitude," which is an individual's potential for performance once they have been trained up to a specified level of ability. I have drawn this distinction from the Penguin Dictionary of Psychology, second edition (1995). "Skill," on the other hand, is the capacity to carry out complex, well-organized patterns of behavior smoothly and adaptively so as to achieve some end or goal. In its origins, skill generally was associated with psychomotor activities. Now, we use it for the verbal and social domains as well. Once again, I have relied on the same specialized dictionary to help me with this.

To make all of the above concrete, you may possess a broad range of abilities -- things you can perform, from judging when to invest to discriminating between a perfectly pitched note and one that is slightly off. You may have various aptitudes including, for example, musical ones, but if these are not fostered, they may never manifest themselves as part of your repertoire of abilities. The skills you possess do show up through the smooth way you execute them. You are "dexterous" at them -- from bowling a perfect game to singing an operatic aria without error. Addition and subtraction are skills you possess, even though you may not have great math ability and, in fact, your mathematical aptitude may be low.



In one of your articles you mentioned "not more than 10 percent of the expenditures actually result in transfer to the job." Does this mean:

  1. Transfer rate of 10% was achieved because proper analysis was conducted (including transfer strategies)?
  2. We should have invested the 90% to improving work environment factors as training was obviously not the ideal solution?

Wonderful question. There are various percentages published concerning the amount of training that "sticks" and is applied back on the job. Depending on the question posed, the research numbers vary. In general, what we see is that when training courses or programs "teach" a host of content and then later surveys are carried out to determine how much of what was taught is actually being applied back on the job, the percentages are very low. Let me give a couple of examples.

Timm Esque and Joel McCausland (Performance Improvement Quarterly, 1997, pp. 116-133) investigated the transfer of a skill set that 400 managers at Intel Corp had been trained on. They asked the managers for examples of application. Approximately 20 percent said that they had used this skill set (Breakthrough System) in their work. However, when the researchers went out to substantiate the reported use, they only found four examples (one percent). In this case, the form of transfer was the percentage of persons using a skill set taught in a training program. Other studies have focused on how much of what was taught was applied on the job. Most have used self or supervisor reports as the means for measuring transfer. Time frames for doing this have varied considerably, from days to months. Brenda Cruz (Performance Improvement Quarterly, 1997, pp. 83-97) has questioned this method and demonstrated that it is a poor form of verification compared with actual observation. So, I must admit that there is a great deal of confusion in what we mean by the word "transfer" as well as how we measure it.

Despite this confusion, what we do learn from the accumulation of writings and studies is that little of what is taught in training programs gets noticeably used back on the job. Often this is due to the following: the people sent to training are not in a position to apply what they are taught (in one company we found that 30 percent of persons enrolled in training classes would never be able to use what was in the class - improper enrollment), insufficient resources back on the job, training not aligned with job reality, supervisors unaware of what is included in the training, lack of on-job support and the list goes on.

To summarize and respond to your question, which I assume really meant to ask whether so little transfer occurs "because proper analysis was not conducted," my answer is that low transfer has a host of causes: inadequate analysis is among them, as is wasted investment in training when the problem was not skills and knowledge related to begin with. I also fault researchers for not being more specific in their definitions of transfer to the workplace. And I fault organizations in general for not measuring application of what training programs intended people to do differently back on the job, for relying on weak measures such as self-report and, especially, for trying to solve with training performance problems that are not skill and knowledge based.

You and I have a challenge here. Let's do more to change the situation. We can encourage organizations to conduct better front-end analyses, establish appropriate metrics and then systematically verify "transfer" in a variety of ways - through observation sampling and by examination of the results of the required transfer. We can also encourage researchers to be more precise in their definitions and to provide tools and methods for easier transfer verification. Until then, we will have to live with the estimates of 10 to 30 percent global numbers, meaning "not very good."


Can HPT be applied in an educational organization?

The answer to your question is, "Of course!" While HPT has grown up in the world of work, there is no reason why its fundamental principles and many of its processes cannot be applied to both educational and social settings. What happens, however, is that the educational establishment is often uncomfortable with the tough talk HPT uses. Frequently, educators hide their fuzzy thinking behind flowery and socially acceptable phrases to avoid confronting what it truly takes to achieve valued accomplishments.

Let me share with you two experiences in non-business settings where we used HPT effectively. In the first case, one of my doctoral students did a study using HPT principles to help decrease rehospitalization rates and medical emergencies and improve the quality of life in the chronically ill aged. In a second instance, a few students and I applied HPT to improving the performance of workers and volunteers in a home for the aged by getting them to apply a "Bill of Rights for the Elderly."

In educational settings, Joe Harless has been strongly advocating application of HPT. He wrote a book called The Eden Conspiracy, which deals with this issue.

So, to conclude, there is no reason other than political pressure and social inertia that should inhibit applying HPT to non-business and industry sectors.


There is increasing interest from the training, consulting, and HR communities in HPT but there is also worry whether this interest is going to translate into increased practice and opportunities. Please comment.

Despite your concerns, I do see progress. Let me step my way through this slowly and carefully. First, you have to take a long-term perspective on fundamental as opposed to superficial change. Here's the distinction. TQM blew in like a storm (and now six-sigma). It was high profile for a while and then faded out. Something remains, but the terms and the specifics have largely vanished. HPT does not offer dramatic salvation. It is a field of professional practice. It is an evolutionary change from single-focus approaches that are more concerned with solution than cause. It is certainly not "sexy." So we mustn't expect the same excitement and enthusiasm engendered by the other "miracle cures" that have come...and gone.

So what are the signs that HPT is healthy and growing? As I mentioned in my interview for PerformanceXpress: an increased number of ISPI members; great international activity; an increased number of publications; more training groups calling themselves something other than "training"; a number of training groups trying to integrate HPT concepts into their daily routines...stumbling around very often, I admit. As I work with different organizations, I see more sophistication in their analyses and solutions than ever before. Definitely, there is heightened awareness of and interest in HPT (or as many call it, HPI).

Okay, now for the hard facts. If all of this is true, do we see changes in the corporations and lots of HPT cases? What about ROI? For the first of these two, you are not always going to see cases. Someone asks for training. A "performance consultant," so named formally or by the person's activities, does a front-end analysis (checks out the legitimacy of the request) and finds that something else is required. S/he helps the requester see possibilities other than training, or in addition to it. A different set of interventions emerges. Nothing dramatic, but this is HPT in action. Of this, I can give you many cases done internally. Deciding to select rather than train. Developing job aids to replace or complement training. Cleaning up job expectations. Setting standards. Changing the work flow or process. This happens all the time. These are rarely documented.

About ROI. This is a problem that goes beyond HPT. In the HR field in general, in training and in HPT, we are constantly pressed to undertake the next venture before we have completed the last one. We don't take the time to conduct ROI studies and calculations. In the 2002 ASTD Industry Report, even though demand from management for ROI results in training was the number one trend, the data on what they call Level 4 evaluation show a downward trend over the last ten years in the percentage of organizations doing it. Why? Pressure to move on to the next thing. It ain't glamorous. "No one will believe me." "I don't know how to do it." Doing ROI calculations is not that hard. Erica and I have just finished a toolkit on front-end analysis and return on investment that will be published by Pfeiffer in April 2004. The guidelines and templates are pretty easy to use. Jack and Pat Phillips have been writing, teaching and preaching ROI for years. The guilty parties are the internal training or performance support groups. I have had the devil of a time getting my clients to spend some money and energy on ROI.
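To illustrate that the basic arithmetic really is straightforward, here is a minimal Python sketch with invented figures. It shows only the generic benefit-cost ratio and ROI percentage calculations, not the templates from the toolkit mentioned above.

```python
# Generic ROI calculation for a learning/performance intervention (illustrative figures)
program_costs = 85_000        # analysis, design, delivery, participant time, etc.
monetary_benefits = 240_000   # value of performance improvement attributed to the program

benefit_cost_ratio = monetary_benefits / program_costs
roi_percent = (monetary_benefits - program_costs) / program_costs * 100

print(f"BCR: {benefit_cost_ratio:.1f}:1")   # -> BCR: 2.8:1
print(f"ROI: {roi_percent:.0f}%")           # -> ROI: 182%
```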

To conclude, I see lots of signs of progress for HPT. The weaknesses lie in two areas. One, lack of effort to do the ROI work. This is bad. We don't show the value of our interventions and then we complain when no one sees how good they were. The second is that we are lousy marketers. We want to be HPT professionals, yet we don't take the time to define what that means, tool ourselves up and then let our clients know about our services through brochures, demonstrations, lunchtime show-and-tell activities and write-ups in internal newsletters...with data.

Yes, times are slowly changing. It's up to the internal groups - with help from outside specialists, sometimes - to take the initiative to advance the cause of HPT within their organizations.


What is the difference between skill and competency?

Although often used interchangeably, there is a practical reason to recognize the difference between the two terms. "Skill," if you hunt for the word in the Oxford Complete Dictionary (British) or the Random House Unabridged (American), has a host of meanings. With respect to the workplace, the key ones are: capability of accomplishing something with precision or certainty (Oxford); the ability, coming from one's knowledge, practice, aptitude, etc., to do something well (Random House). The important point is that "skill" suggests the ability to do something. If you play the guitar well or speak a foreign language fluently, you possess skills. "Competency" and its virtually synonymous cousin "competence," while close in meaning to skill, contain an important nuance: "sufficiency of qualification, ability to deal adequately with a subject" [the word bears a relationship to "compete"] (Oxford). "Competent" is defined as "possessing the requisite qualifications." Random House offers "possession of required skill, knowledge, qualifications or capacity...based on meeting of certain minimum requirements," and defines "competent" as "properly qualified."

Why are these definitions so important for us in the performance improvement field? Because there is a great deal of confusion in terminology and as a result, some fuzzy thinking and acting.

Here is what I propose. "Skill" is something you are able to do. "Competency" (or competence) is an ability to do something that is required by a job or task situation. Ability to juggle balls is a skill. A clown's job in the circus requires competency in ball-juggling. You assess a person for skills. You define a job in terms of required competencies.

Competencies are, therefore, what is required to do or perform with respect to external requirements. They state what is essential to the job.

Please don't confuse competency with performance (behaviors that result in valued accomplishments) or with characteristics (the traits or attributes people possess).

There is so much similar terminology, yet it is important to create a precise vocabulary in Human Performance Improvement. This is a requisite of a scientific and professional field.

For more terms and definitions, please visit the Terminology Lexicon.



When conducting a needs analysis, what are the most important questions to ask?


What you ask is not so easy to answer. A lot depends on the circumstances of your needs analysis, in other words, at what stage you are asking the questions and how much information you already possess about the specific context and performance gaps. Sometimes, also, your data comes from observation or business results. Let's start at the beginning.

If you review one of the previous questions on our Website in the Ask Harold section, you'll find that I discuss needs analysis. There are a whole host of terms used that are similar in meaning, but may vary in practice. I assume you are interested in improving workplace performance. Therefore, the first question is one you ask yourself. Am I doing this analysis proactively or reactively? If you are operating as a performance consultant within your company and have a set of clients for whom you are responsible to provide learning and performance support, your questions will focus on how well people are currently generating desired results and on what changes are anticipated that will have an impact on what people do and how they do it. These are questions you would ask of your clients and their management. You should be constantly scanning your client universe for performance gaps and changes. As you spot anomalies, you must ask questions to verify what you notice and determine the implications.

When someone comes to you with a request for training or another intervention, the pattern of questioning is different. You must ask questions to clarify the request first. What outcomes are they seeking? Then, you probe to identify the business need that triggered the request. Your questioning must go beyond the client. Based on existing data, documents or business decisions, and on expert, management, customer and performer input, you must establish in operational terms what the ideal state should be. Through further questioning, observation and data gathering, you have to determine what the current state is. Even if a new system is being introduced, you might ask questions about previous new system implementations, what problems occurred for performers back then and what parts of the previous system never got implemented. This provides clues for new implementations.

So, to conclude my response, the questions you ask depend on the nature of the needs analysis, what triggered it and how informed you already are about the performance issues. There is no magic set of questions.




Do you have any additional information on the topic of transforming performance capability?


Yes, a whole library full of material. A good start would be to join the International Society for Performance Improvement (ISPI). Their Website is www.ispi.org. Check for local chapters. Start reading their publications. Two good books for your bookshelves are: Handbook of Human Performance Technology (editors H.D. Stolovitch and E.J. Keeps), available through HSA's Website (book order form), through ISPI's Website at www.ispi.org or from Amazon.com at www.amazon.com. I also like Moving from Training to Performance: A Guidebook (editors Dana and Jim Robinson), available through the American Society for Training and Development's (ASTD) Website at www.astd.org or from Amazon.com.



I am most interested in supporting my managers in having a discussion with their employees pre- and post-training. Do you already have a template of what this discussion might look like?


I do not possess a template. However, I can offer recommendations. Before training, I suggest you send a note to the managers/supervisors informing/reminding them that "next week, XXXX will be attending a XX hour/day training session. Here is the rationale for the session and for XXXX's participation. Here are the objectives and an outline of the session. Please schedule a 20-minute meeting with XXXX (this could include one or several participants). During the meeting, discuss the following:

  1. The reason why XXXX is/are attending training.
  2. What they should anticipate during the session.
  3. How you expect them to participate during the session (which may include questions they should ask).
  4. What you expect XXXX to bring back and apply upon returning from training.

Set up a time for a post-training meeting shortly after the training has been completed to discuss more specifically application to the job.

Following the training, meet again at the pre-selected time. Debrief the training with questions such as:

  1. What key points or issues were raised during the training that are relevant to our group and your job?
  2. What did you learn that we can integrate and/or that you can apply immediately? Soon?
  3. What do you require to apply what you learned?
  4. What can I do to help you make it happen?"

The "Learning and Performance Support" group must help managers and supervisors to see the value of this, train and support them, and monitor the results. This should help you get started.



Why is it so hard to apply a few proven best practices in our work environment?

The answer is relatively simple: implementing a solution is far more complex than knowing about it. First, there is inertia in any system. We tend to perpetuate habits and frequently exhibited past behaviors. Even though new, credible information convinces us of the necessity to change, we persist with old, counterproductive patterns of behavior. A simple example: we have all received plenty of information on proper food/eating habits, nutrition and fitness. We know what we should eat and that we should exercise regularly. So, do we implement these "best practices?" Some of us do; some of us try; many of us nod our heads and then continue with what we know are inappropriate ways.

How to change this? We have to begin with the environment. Clear, valued, meaningful expectations; timely, specific feedback; and consistent, persistent support are necessary. Appropriate infrastructure and adequate resources are also essential. Meaningful incentives and consequences must be put in place and administered fairly. Where skills and knowledge are lacking, training and coaching are required. In the work setting, careful selection of employees whose background and characteristics align with performance requirements increases the probability of success (e.g., they already eat healthfully and exercise regularly). And we must ensure that we truly value these new practices we desire, have confidence to implement them and have patience to support approximations toward the goal. It's a tough job, demanding commitment and discipline on the part of senior managers (not just lip service), but it can be done.


How do accelerated learning and brain-based learning relate to the emotions of fear?


The fundamental problem with accelerated learning is that it is not what one would call a well-defined construct (something that is not directly observable or objectively measurable but is assumed to exist because it gives rise to measurable phenomena). A number of people use the term toward a number of ends. It's like "learning styles." What exactly does it encompass?

In accelerated learning, there have been many trends. Work done by the Bulgarian psychiatrist Georgi Lozanov was placed in this approach with his "suggestology" and "suggestopaedia." It created a sensation in the learning of second languages but did not have staying power once scientific research investigated learning beyond the basics.

Other contributors to this movement have been the split-brain theorists. Once again, the explanations they offer have proven too simplistic from a neuro-scientific perspective. The brain is far more complex than simply dividing it along hemispheric lines. Triune brain theory, likewise, does not handle neuro-scientific research very thoroughly, although it borrows from this literature. Much is being made of Gardner's multiple intelligences, but how this affects learning is still unclear. Other contributors have been those involved in creativity and in cooperative learning...and the list continues.

This is what I mean by accelerated learning not being a solid scientific construct. Thus it is difficult to know what precisely we mean by it and how it works.

I recommend reading more about how the human brain (and mind) processes information and how it perceives and retains information for later retrieval. The two most fruitful areas in my opinion are in cognitive psychology and in the neuro-sciences themselves. Not easy reading, but more valid and reliable for truly understanding learning processes.

Ellen Gagne's book (along with the Yekovitches) is a good starting place for cognitive psychology. Robert Ornstein has written a lot in a fun way on the evolution of consciousness. It's easy reading and will get you started.



What is the difference between a training needs analysis and a performance analysis?

Training needs analysis is quite a tricky term. It suggests that you are out hunting for gaps that can be filled by training. I, personally, am uncomfortable with this term. Why? Because it suggests a solution in search of problems. The literature on human performance at work suggests that training, by itself, rarely is sufficient to create lasting workplace change. It can be an essential part of an integrated set of interventions, but alone, has not got the staying power to make a significant, long-term impact except in a very few instances. Read about transfer of training to the workplace and you will soon see the discouraging results.

Performance analysis, a term coined by Gilbert and Rummler, is more investigative and comprehensive. Performance analysis is a methodology that permits you to identify and analyze performance gaps, to identify the array of factors influencing the gaps and to determine the range of interventions required to eliminate the gaps.

I strongly recommend the latter approach. I, myself, like Joe Harless' term front-end analysis, mostly because of the chronological sense it offers. It's what you do up front to determine what the business requirement is, what desired and actual human performance are, what the magnitude, urgency and value of the gap are, what factors affect the gap and what interventions are appropriate, economical, feasible and acceptable to the organization and to the performers. It is a wonderful means for helping to solve human performance problems.

I recommend that you view yourself as a performance consultant more than as a training person and that you conduct performance analyses or front-end analyses. They will allow you to have a greater impact than with a training needs analysis.


We frequently hear the term "learning styles" tossed around like "I know there are tests for people to identify their learning style" and so on. People use this to argue for one delivery media over another like "My learning style prevents me from engaging in e-learning!" I would like to know if there are any scientific grounds for the existence and definition of learning styles. If so, what are they and what are the main messages coming out of this literature?

There is a lot written on learning styles and a lot more folklore circulating about it. Here are two useful articles that deal with your question. The first, McLoughlin, C. (1999). The implications of the research literature on learning styles for the design of instructional material. Australian Journal of Educational Technology, 15(3), 222-241, provides a good overview of the main currents and definitions of terms that are similar, yet different. These include: learning preferences, cognitive styles, personality types and aptitudes. It also examines two main learning style theoretical approaches, one that divides learners along wholist-analytic and verbaliser-imager dimensions and the other that suggests four categories: activists, reflectors, theorists and pragmatists. The second article, Stahl, S. (1999). Different strokes for different folks? A critique of learning styles. American Educator, 23(3), 27-31, questions the validity of the learning style construct itself. In the article, Stahl examines learning style inventories and questions their reliability.

My take on all of this is that there are individual differences that affect the way we learn. However, there are also many rules with respect to learning that apply to all of us as human learners. While it may be useful to factor in variations in learning approaches, it is probably more useful to apply those universal principles that research on learning consistently suggests result in higher probabilities of learning. What are they? Six simple ones:

  • When learners know why they are supposed to learn something (a rationale with a credible benefit to them), the probability of learning and retention increases.
  • When learners know what they are supposed to learn (a clear, meaningful objective), the probability of learning and retention also increases.
  • If what is to be learned is clearly structured and organized so that the learner easily sees the organization and logic, again, learning and retention probabilities increase.
  • If learners have an opportunity to actively respond and engage in the learning in meaningful ways, once again, learning and retention probabilities increase.
  • If learners receive corrective and confirming feedback with respect to responses they emit or activities in which they engage, their learning increases along with retention.
  • Finally, if the learner feels rewarded for the learning, has a sense of accomplishment or is given an external recompense for the learning that he or she values, learning and retention have an increased probability of occurring.

With respect to being more visual or auditory, while there may be significant differences among learners, stimulus variation is more important.

Concerning the use of media or self-pacing, issues about learning tend to focus more on the design of the mediated or self-paced material than on the medium itself. Richard Clark has written extensively on media not being the message. If the mediated and/or self-paced material follows the universal rules and is well supported, it will work. Some learners who are not used to learning on their own may require additional support and control. Distance universities, such as the British Open University or Athabasca University in Canada, have learned how to do this well.

So, to conclude, there is a lot of folklore and some science concerning individual differences in learning. Best to apply universally sound methods to enhance learning. Vary activities to maintain interest and attention. Provide support and control mechanisms to help learners "stick with it." This way, you address all learning styles.


Several years ago I encountered a training philosophy that I liked, but I don't know how to find resources that would teach it. The philosophy is that you let people know up front what specific knowledge and skills they will be taught and expected to know. An outline is made which essentially becomes a written test, but not in the traditional academic format. The test items are written in a verbal, proactive way, such as "Describe...", "Relate these items..." or "Demonstrate..." for those skills that can be observed. So the "test outline" is seen at the beginning, the trainee knows exactly what is to be learned, and the objective is to complete it with 100%, not a lesser passing grade. The concept is that, if you are training someone in an important task, are they competent at 80%, or wouldn't it be better if they were 100% on the things you need them to know? Can you identify this philosophy and lead me to resources?

If I understand this question correctly, you are talking about criterion-referenced testing. You create a task analysis that focuses on what people have to be able to do as a result of the instruction, develop performance objectives based on the task analysis and then create test items that perfectly match the objectives. If, as a counter person in a delicatessen, you have to be able to slice a bagel into two perfectly equal halves with smooth surfaces and no injuries, you develop a test item that requires demonstration of this precise task performance along with a checklist or observation instrument to verify performance to standard. This is also true for firefighting, analyzing poetry or performing a swan dive. Two wonderful resources are Robert Mager's book, Measuring Instructional Intent, published by ISPI, and William Coscarelli and Sharon Shrock's book on criterion-referenced testing, also available at www.ispi.org.
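As a minimal sketch of the 100%-mastery idea, here is a short Python illustration; the observation checklist items for the bagel-slicing example are invented for demonstration.

```python
# Hypothetical observation checklist for the bagel-slicing task described above.
# Criterion-referenced: every item must be observed; there is no partial passing grade.
checklist = [
    "Two halves of equal thickness",
    "Both cut surfaces smooth",
    "No injury; knife handled per safety procedure",
]

def passes(observed: set[str]) -> bool:
    # Mastery means all checklist items were demonstrated, not a percentage score.
    return all(item in observed for item in checklist)

print(passes({"Two halves of equal thickness",
              "Both cut surfaces smooth",
              "No injury; knife handled per safety procedure"}))  # True (100% mastery)
print(passes({"Two halves of equal thickness"}))                  # False
```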


What is the difference between organizational effectiveness and human performance technology (HPT)?

All organizations want to be effective. How they define effectiveness varies, especially over time. Going back to the early part of the 20th century, management pioneers such as Frederick Taylor defined effectiveness as being the result of production maximization, cost minimization and technical excellence. Henri Fayol viewed effectiveness as the outcome of clear authority and discipline within an organization. Elton Mayo humanized the concept of organizational effectiveness. He defined it as productivity resulting from employee satisfaction.

Today, we tend to view organizational effectiveness as the ability of the organization to be effective in accomplishing its purposes, efficient in the acquisition and use of scarce resources, and a source of satisfaction to its owners, employees, customers and society. Organizational effectiveness is also concerned with the organization's ability to be adaptive to new opportunities and obstacles and capable of developing the ability of its members and of itself to meet new challenges. In the long term, organizational effectiveness is about the organization surviving in a world of evolving uncertainties.

To summarize, organizational effectiveness is the ability of an organization to fulfill its mission through a blend of sound management, strong governance and a persistent rededication to achieving results. It includes meeting organizational and stakeholder objectives -- immediate and long term -- as well as adapting and developing in response to a constantly changing business environment.

We measure organizational effectiveness in a variety of distinct yet interrelated ways, all centered on the organization's ability to:

  • Accomplish its stated goals.
  • Acquire needed resources.
  • Satisfy all strategic constituencies.
  • Combine internal efficiencies with affective health.

Criteria for organizational effectiveness include:

  • For owners and shareholders: return on investment; earnings growth.
  • For employees: compensation; benefits; job satisfaction.
  • For creditors: satisfaction with debt payments.
  • For unions: satisfaction with competitive wages and benefits; satisfaction with working conditions; fairness in bargaining.
  • For local communities: involvement in and support of local affairs; environmental respect.
  • For government agencies: compliance with laws; avoidance of penalties.

Professionals in human resources and organizational development are most directly concerned with helping to facilitate organizational effectiveness. They generally assist in the following:

  • Defining organizational direction.
  • Creating appropriate organizational structures.
  • Defining required organizational skills (generally non-technical).
  • Defining organizational processes.
  • Improving communication and building trust.

Hence those involved in organizational effectiveness focus on the current and future overall direction, functioning and health of the organization.

Human Performance Technology (HPT), as its name suggests, is a "technology," defined as the application of scientific and organized knowledge to achieve practical outcomes. Its focus, within the organizational context, is the achievement of desired performance from people. HPT tends to deal with concrete, specific performance gaps. While organizational effectiveness is a desired state, HPT is a field of study and practice. HPT practitioners are professionals who work with organizational clients to identify gaps between desired and actual performance states whenever these occur (technical as well as non-technical) and then measure these gaps in terms of magnitude, value and urgency. They identify factors affecting the gap and recommend interventions to eliminate these gaps. In many instances, they design, develop, help implement, and monitor and maintain performance interventions.

HPT provides processes, tools, principles and clearly defined methods for operating at every stage and step of performance improvement. The results of applying HPT are verifiable through concrete measures.

To conclude, organizational effectiveness is about the overall functioning of an organization. HPT is about engineering effective human performance in specific ways. There is an obvious link. To achieve organizational effectiveness requires the efforts of many managers and professionals. As gaps are identified, the HPT professional can play an extremely valuable role in investigating these and creating solutions. HPT is one of the key professional means for achieving organizational effectiveness.



What is the difference between Organizational Development and Human Performance Improvement?

This is a frequently asked question and understandably so. Both fields are aimed at improving the performance of people and organizations. However, there is considerable difference in style and practice between the two. Organizational Development (OD) adopts a very broad-brush approach to overall organizational performance improvement. It generally operates at a macro level. It is most frequently employed when there is anticipated or actual significant organizational change. Human Performance Technology (HPT) is more of an engineering discipline. It focuses on identified gaps in human performance and seeks to eliminate (or at least reduce) these. Here are a few definitions of OD:

OD is the field of study and practice that focuses on various aspects of organizational life, aspects that include culture, values, systems and behavior. The goal of OD is to increase organizational effectiveness and organizational health, through planned interventions in the organization's processes or operations. Most often, OD services are requested when an organization (or a unit within an organization) is undergoing a process of change. OD is a planned effort to improve an organizational unit and the people who comprise it through the use of a consultant and sets of structured activities dealing with organizational health and effectiveness.

OD is a distinct consulting method that focuses on the people, culture, processes and structure of the whole organization. A primary goal of organizational development is to optimize the system by ensuring system elements are harmonious and congruent. Performance suffers when structure, strategy, culture and processes are misaligned.

Compare these to the following typical definitions of HPT:

HPT is a set of methods and processes for solving problems or realizing opportunities related to the performance of people. It may be applied to individuals, small groups or large organizations.

HPT is an engineering approach to attaining desired accomplishments from human performers by determining gaps in performance and designing cost-effective and efficient interventions.

HPT is a field of study and professional practice aimed at engineering accomplishments from human performers. HP technologists adopt a systems view of performance gaps, systematically analyze both gaps and systems, and design cost-effective and cost-efficient interventions that are based on the analysis of data, scientific knowledge, and documented precedents, in order to close the gaps in ways all stakeholders value.

Note that both are concerned with improving performance. They are in many ways complementary. However, the words and style of the definitions immediately reflect the fundamental differences between the two approaches. OD deals with the overall health of the organization. It comes into play when the organization is not functioning properly or will have to alter its way of operating. HPT focuses on identifying and closing specific performance gaps. It springs into action when specific results are not being achieved. OD emphasizes organizational effectiveness. It is generally called upon when a major change affecting the entire organization is anticipated or occurring. It relies on consultant intervention, bringing together all the affected players and groups, and facilitating communication and decision-making. HPT is applicable to any human performance gap, whether these result from change or from ineffective practices, improperly aligned incentives and consequences, lack of appropriate skills and knowledge, or other environmental variables that act as either obstacles to or facilitators of performance. HPT's measures are narrowly focused and usually deal with cost-effectiveness, efficiency and productivity. HPT operationally defines "performance" in terms of efficient behaviors that produce valued, verifiable accomplishments.

To summarize, OD is a field of practice aimed at analyzing the functioning of an organization, facilitating change and, through its process consulting capabilities, bringing disparate elements of the organization into alignment, generally around planned change. HPT is also a field of practice. However, its concern is with improving verifiable performance through people. Its starting point is any business need, from improving nuts and bolts production or reducing wastage and scrap, to improved processes and productivity. It is bottom-line focused and not consultant dependent. If producing a simple, printed job aid achieves desired behavior and accomplishments, it has been successful.

My personal sense is that HPT is closely associated with hard core measures directly linked to specific performance interventions that produce observable, measurable results. OD seems to me to be characterized by facilitation and communication practices. OD frequently employs surveys in its professional work. Its aim is the smooth, coordinated and aligned functioning of the organization. HPT is mostly results data driven and far more directive. Each has something to offer the other. From personal experience as an HPT practitioner, I have found that working with OD specialists has been very helpful. HPT's front-end analysis and systematic design and development methods resonate well with them. I have certainly profited from their consulting skills and methods for building consensus and coordination around performance improvement goals.


Would you have any advice for new consultants to the field? What skills, aptitudes and attitudes do you think are important to succeed in today's competitive marketplace?

Chapter 32 in the second edition of the Handbook of Human Performance Technology (Harold D. Stolovitch & Erica J. Keeps, eds.) focuses on skill sets, characteristics and values of the human performance technologist. It provides an inventory of valued skills organized by various categories (technical, people, management). The chapter also summarizes advice from a number of successful HPT practitioners and covers skills, aptitudes and attitudes. Chapter 35 in the same volume provides an excellent set of strategies for survival for the HPT professional. I strongly recommend starting with these.

From my own experience and also through speaking with HPT practitioners whom I respect, I have derived the following recommendations for consultants starting out:

1. Assuming you have undertaken formal training, usually through a master's or doctoral program in Human Resource Development, Organizational Development, Business, Educational Technology, Human Performance Technology or related fields, find a mentor or mentors whom you admire as early as possible. Work with the mentor/s on projects, no matter what the role or remuneration is. Be an apprentice. Discuss; debrief; learn!

2. Attend professional meetings that bring together people in the field. Listen, question and discover what they are doing. Probe what their joys and sorrows are. Learn vicariously from them. If they generate ideas that appear attractive and relevant to you, test these out - even in informal ways with volunteer groups or associations to which you belong.

3. Volunteer. Participate as an active member in your local International Society for Performance Improvement (ISPI) or American Society for Training and Development (ASTD) chapter. Look for professional association projects on which interesting people are working. Seize the opportunity to join with these professionals.

4. Read professionally. Seek out good cases in performance improvement, use of technology for learning and performance support, measurement and evaluation, return on investment and other important facets of our field. Examine and analyze them. Imagine what you would have done in these instances and play out your solutions in your mind.

5. Contribute. Participate in a project that forces you to stretch and grow. Help organize a professional meeting. Present something you did for scrutiny by your peers. Become active professionally to learn and to get your name out there as a person who is active and capable.

6. Find kindred spirits who share the flame and with whom you are comfortable to discuss your ambitions, desires, successes and failures. Support groups that understand what you want to achieve and will not judge you are great for helping you grow. They will test your ideas with you as well as provide you with their experiences and reactions and act as a reality check when your ambitions appear to overreach your competencies and experience.

7. Reflect. It's wonderful to act, but it's also critical that you reflect on your own actions. Examine as objectively as possible what you are doing or what you have accomplished. Assess what worked and what didn't. Plan your next steps in light of these reflections.

8. Be patient. Two steps forward, one step back. You have successes, you move ahead and then all of a sudden things come to a standstill or even regress. In my years of working in the field, I have had euphoric moments when things were going well and then, all of a sudden, problems and frustrations have emerged. We are HUMAN performance technologists. Our field is young, our science imperfect and our goal - to change human behavior and accomplishments - outrageously ambitious.

To summarize, work hard. Learn all the time. Stretch. Share. Pay your dues. Have fun!



To what extent do we, as instructional designers and human performance technologists, need to immerse ourselves in the contexts of our client's business? In other words, must we acquire or possess in-depth subject knowledge of their field, or is it sufficient to be merely familiar with their content?

Your question has implications at two levels. There are internal instructional designers/performance consultants and external ones. The internal ones generally have a deeper sense and firmer grasp of the client context (although occasionally, I have found that this is not true). The external ones, especially those new to a client organization, are definitely at a disadvantage. However, experienced professionals in our field generally learn quickly. If you are not a quick study, you can't survive in this business.

In the type of work instructional designers and performance practitioners do, we essentially get paid for two things: our knowledge of how to identify desired and actual states of knowledge or performance and determine what it will take to get from here to there; and our ignorance of the subject matter, coupled with our ability to ask the right types of questions to extract from subject-matter experts (SMEs) what novices require. We are the advocates of the learners/novice performers.

Sometimes the subject-matter area is very complex and the targeted learners already have a large repertoire of prior knowledge and experience. This type of situation often calls for a designer/consultant who has some background in the content area. In highly technical contexts, a strong technical background helps in being able to "understand" the content area -- in being able to ask the right questions. The same holds for contexts where awareness of cultural norms different from those of the designer/consultant may have an important effect on both analysis and design. Areas of high risk usually demand experience and background in the nature of the work and the contextual constraints.

So, to the bottom line. For most contexts, an excellent designer or consultant with experience in a broad array of projects usually does not require content knowledge, so long as there are very competent SMEs with whom to consult. The experienced professional will be able to draw out the critical content and then validate it with other qualified SMEs. Where there are unique circumstances with very specific content that is not readily understood by someone who has no base of knowledge and experience in the domain, some expertise may be required.

We pride ourselves on our ability to engineer learning and performance. Experts, as the knowledge engineering world informs us, do not really know what they do or how they do it (unconscious competence). Like the knowledge engineer, we draw out the required knowledge from the expert and reframe it in terms of learning logic for our less knowledgeable performers. This is our expertise. We create learning and performance systems that help our target groups achieve what the organization desires. This, too, is our expertise. We make learning and performance happen based on what others know. That is our capability. Strangely enough, we sometimes gain so much expertise through our work on a client project that we end up with high levels of competence ourselves!

To conclude, we do have to understand our clients' business and what it is they wish to achieve through our interventions. We do not, however, have to be or become subject-matter specialists. That's why there are SMEs who provide expertise and us folk who extract this expertise and transform it into learning and performance outcomes in others. That, in itself, is quite the challenge!



What are the best, most popular needs assessment (competency-based or performance-based) models?

Why can't the question ever be simple? Kidding aside, "best" and "most popular" are not synonymous terms. Joe Harless, back in 1975, created an excellent "Front-End Analysis" methodology, which he has improved on over the years. However, his approach is so demanding that, in today's world of hyper-speed, not many are willing to invest the time, resources and effort to do it the Harless way. It is an excellent system.

Roger Kaufman has written literally hundreds of articles and many books about "needs assessment". His approach is both popular and good. His more recent works include a model that focuses on societal ends and the means for achieving valued outcomes at the "mega" level. It, too, is demanding - as it should be.

Allison Rossett has written two books on the subject. Her "Training Needs Assessment" (1987) is very good on methods for collecting data and determining gaps. However, as its title suggests, the solution that is highlighted is training. Her more recent book, "First Things Fast," is very pragmatic. It deals with identifying needs more speedily than other models. Of course, something's gotta give. The outcomes are less rigorously attained than with Harless's and Kaufman's models.

One of the all-time favorites is Bob Mager and Peter Pipe's small but very popular volume (revised in 1997), "Analyzing Performance Problems." It is down-to-earth, sensible and easily applied. It's up to you how much rigor you put into its application.

There are many other approaches. Gary Rummler and Alan Brache wrote "Improving Performance: How to Manage the White Space on the Organization Chart." It is very comprehensive and well laid out. It examines and identifies gaps at the organizational, process and individual levels. Danny Langdon has his own approach in "The Language of Work," which examines the work, the worker and the workplace.

Well, I've named the well-known ones. All of these can be helpful to you. Let me conclude by saying that I, too, have a successful approach. We use it in our work. We have borrowed the term "front-end analysis" from Joe Harless. We haven't published a book on it, but we do teach it in in-house seminars for clients. It is pragmatic, rigorous and identifies performance gaps based on business needs. It also identifies gap causes and suitable interventions.

So what's the bottom line? Many approaches, all useful, some more rapid, some more rigorous. All of the ones I have described are performance-based. Beware competency-based approaches. These begin with an assumption that competencies will lead to performance results. Dangerous assumption! As almost all the models suggest, being able to perform does not necessarily translate into performance. The environment often has far more to do with performance outcomes than the abilities of people.


More and more, organizations are trying to homogenize their training audiences, that is, to group together learners with the same skill levels and learning abilities instead of diversifying the audience (fast, regular or slow learners in separate groups). What do you think of this?

Two types of considerations must be weighed in making this kind of decision: the effectiveness of the learning and its efficiency. Let me explain. We have a group of people with widely varied characteristics, which is almost always the case. Is it better to keep them together and exploit their differences to enrich the learning experience? Or is it preferable to separate them and let them learn at their own pace? The research on this question is not conclusive. Creating mixed groups for activities that encourage peer teaching has advantages. The person who learns the material more quickly, or who has broader knowledge and/or experience, shares it with those who are less advanced. This kind of learning through exchanges among group members proves very effective. If we offer challenges to the more advanced participants, we maintain their interest and make them feel valued. They appreciate an approach that stimulates their learning. Obviously, the training must be designed to exploit the diversity of knowledge levels in the group. Training that kept to the pace of the slowest learners would be disastrous for the faster or more experienced participants.

On the other hand, we find that training adapted to specific levels of knowledge and/or skill can also be very effective. The reasons are fairly obvious. Each group proceeds at its own pace (fast, medium or slow). We lose some of the richness that the diversity of levels offers in the other model, but the learners are less frustrated.

So much for effectiveness. Practical constraints must also be considered. Too few learners to form separate groups, the need for an identical mode of instruction, a lack of resources to design multiple versions of the training, or too few trainers to cover groups on different schedules are all factors that argue for keeping everyone together in the same group. Conversely, creating separate groups saves time. Learner salaries represent the largest portion of training costs. If we can shorten the training period for people whose learning needs are smaller, so much the better! That answers, in part, the question of efficiency.

These arguments favor the creation of self-paced learning programs in cases where group training offers no particular advantage.

Every situation is unique. All of its aspects must be evaluated.

One more important point. If the gap between learners' levels is too wide, it becomes impossible to build a training activity that meets everyone's specific needs. Some diversity is good. However, if some people lack the necessary prerequisite knowledge while others already know the material, or if knowledge levels are simply too far apart, the training activity becomes useless.

Those, in brief, are my thoughts and opinions. I hope they shed some light on the question.



What do you think the future holds for training professionals with respect to Reusable Learning Objects?

"Reusability" has become a major buzzword over the past two years. The word "reusable" is now part of a new vocabulary that includes reusable information objects, reusable learning objects, reusable nuggets, etc. Let's examine the problem that generated the concept and these terms.

There is a great demand to get both information and learning opportunities to widely dispersed populations with immediate needs. In a global organization, disseminating the right content to the right people at the right time in the right amount has become a high priority requirement. Logically, if we can create content and place it in an information repository in discrete chunks each time we develop information packages and/or learning programs, we can then go into this repository, withdraw relevant chunks for a particular group or individual, combine these into coherent packets and deliver them to anyone, anywhere, at any time.

Great idea. Now for the reality. Each chunk of information or learning must be properly identified (tagged) so that it can be easily located and retrieved. Each has to be in a form, and contain information written in a manner, that is relevant to the recipient. If you require a combination of these chunks, they have to be "compatible," that is, they have to fit together and make sense in combination.
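To illustrate what "tagging" a chunk might look like in practice, here is a minimal sketch. The field names are mine, loosely inspired by learning-object metadata schemes, and are not drawn from any specific standard:

```python
# Hypothetical example of the metadata a reusable chunk might carry so it can
# be located, judged for fit and combined with other chunks later. The field
# names and values are illustrative, not a specific standard.
learning_object = {
    "id": "LO-0042",
    "title": "Reading the new order-entry screen",
    "type": "procedure",                        # e.g., concept, fact, procedure
    "intended_audience": "call-center agents",  # chunks are rarely truly neutral
    "prerequisites": ["LO-0041"],
    "estimated_minutes": 5,
    "keywords": ["order entry", "screen navigation"],
    "version": "1.1",
    "last_reviewed": "2002-03-15",              # volatile content ages quickly
}
```

Even in this simple form, notice how much of the tagging (audience, prerequisites, review date) ties the chunk to a particular recipient and moment, which is exactly where the reusability problems described below begin.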

While this can be and is being done, I believe that there are some false expectations being promulgated by proponents of reusability. I could go on at length, but let me state a few here:

  1. There is an assumption that you can create "neutral" chunks of information or learning -- ones that can be used for a variety of recipients. This is a false notion. These are not objects as in object-oriented programming, or like letters in an alphabet. A piece of writing takes its meaning with respect to an intended recipient, and within a context of what has come before and what will come after. Using linguistic jargon, each chunk has a connotative as well as a denotative meaning. The more sanitized the individual chunk is -- for reusability purposes -- the less meaningful it is for any person or context.

    You can reuse photos, brief descriptions of things, definitions, etc. But if you write a technical piece of content, you do it with some image of the recipient in mind. Is it a salesperson, an end user, a systems engineer or a journalist? The bottom line is that content reusability is highly iffy. You have to make it usable before you go for reusable. And when you do this, you may be reducing its reusability. Quite a paradox!

  2. Imagine your closet as being a repository of clothing chunks/objects. You want a particular look and feel today. Can you pull a sleeve from one item of clothing, a pocket from another and a cuff from a third? What about color, texture, fashion, season, taste...?

    What you create may end up looking like a clown's costume. When you put together chunks of information or learning you may achieve a similar result.

  3. All right. Perhaps I'm setting up a straw man. We can create varying outfits from combinations of blouses, slacks and belts. However, notice what is occurring: the chunk sizes have increased. We still require compatibility. And a range of sizes. So, too, with IOs and LOs (information objects and learning objects, which by now need no explanation). A rational fit is not very likely, even when the chunks contain the right content, if they were created for different purposes and contexts and don't match. Once again, if you force everyone to create each chunk to a prespecified template and to be recipient neutral, you are likely to get relatively meaningless combinations that will be of use to no one.

  4. Where reusability is in highest demand is where there tends to be the highest volatility of content. Product information, competitive information, innovative technology, new systems, new procedures... No sooner have you created the content than it changes. This means that stored chunks can become obsolete rapidly. As information tends to multiply at great speed, we soon have huge information repositories filled with aging information. So we have to vet the information. This requires effort and resources. My experience is that maintenance of information and instruction is often neglected. Will we be inclined to do it better in the reusability age?

I am not a cynical person. However I am skeptical. I still have not seen viable reusability systems except in limited contexts. What I believe is far more fruitful with respect to reusability is the following:

  • Creation of reusable instructional templates into which new content can be poured, but which are instructionally sound for building learning.
  • Creation of reusable task analysis templates that can be applied to new systems that require similar types of procedural and declarative learning.
  • Design and development of reusable learning exercises that are adaptable to a wide variety of content and learners.
  • Design and development of job aid templates that adapt to a broad range of performances.
  • Creation of delivery and presentation formats with demonstrated effectiveness that allow new information to be poured into them.

I believe that the key to reusability is not in the reuse of content, although there is a limited place for this. Valuable reusability comes from the reuse of well designed analytical, instructional, presentation, delivery and evaluation frameworks that have generalizability over a broad range of content, tasks and learners. These require deep knowledge of analysis and design to create.

Once these have been created and tested, others who are less knowledgeable can then reuse them.


How does one objectively measure a candidate's personality in an interview? How important is the personality aspect in a candidate vis-à-vis his technical skills in today's competitive scenario?

First, let me point out that this is not my area of expertise. There are specialists who work on this and who have derived/created inventories and protocols to assess personality traits. Having made this disclaimer, I personally do not feel that they capture the "mystical" qualities that we so often look for in job candidates. I'm not trying to be mysterious here, but each organization has its own culture and personality. Candidates with compatible or complementary personality traits either fit better or help an organization grow. Improper fits can be highly destructive.

I am a fan and advocate of performance-based assessment and testing. I have found that it is a better predictor of future performance capability than psychometric tests. Consequently, my recommendation is to place the candidate in circumstances similar to those s/he will experience on the job and observe attitudes and behaviors. Assessment centers do this. They are expensive, however. You might try scenarios and/or role plays, execution of tasks under time or stress conditions or at the least, case studies that raise affective as well as job performance dimensions ("So what would you do now, when this occurs?"). In addition, if you have the time, going for a refreshment break and chatting informally with the candidate and having her/him interact with various persons in the organization are also helpful.

On the issue of importance of personality traits, a great deal depends on the nature of the job. If the person will be working alone and will only be performing technical duties, personality is less critical. In today's work environment, this is highly unlikely. Teamwork is a big part of what we do in modern organizations. We interact frequently with a variety of players. Studies on team efforts show that the success and failure of teams are tied more to how well members work together than to technical competencies. Hammer and Champy said it well in their first book on reengineering: you hire for characteristics; you train for competencies. This is an oversimplification. Baseline competencies are necessary for many jobs. However, how a person is and the person's deeply held values are extremely important. In organizational settings, it is virtually impossible to alter these. Incompatible personality traits can tear apart a team...a group...an entire enterprise. Think of someone you know who was fired recently. Was it due to lack of technical skills, or because of disruptiveness, work ethic and the like? The latter is more frequently the case.

To conclude. Watch carefully whom you hire or promote. Establish what I call "fatal flaw" characteristics criteria -- those which could lead to disaster. Test for these in a performance-based manner. However, do not use this as a means for discriminating against those who are different. The best hire is one who possesses the right skills and whose personality, however different, will enrich your organization. Vive la difference!


What is meant by a "blended solution"? Must it include a high tech component?

Here is a brief definition of "blended solution": Intelligent intervention combinations, seamlessly integrated to produce desired learning and performance outcomes.

Blended solutions do not require high tech components although in this modern age, they are frequently included. Here is an example of a blended solution without any high tech elements. To help sellers of monthly passes at public transit ticket offices rapidly and inexpensively calculate the price of multiple passes for adults, seniors and school children, we created a simple, paper-based, laminated job aid and blended it with training on the use of the job aid, guided on-job practice and a performance feedback system for each seller -- a printed graph -- that informed him/her daily of speed, accuracy and customer satisfaction ratings. Speed of sales jumped 240%, error rates decreased from 3.5% to less than 0.5% and customer satisfaction increased by 30%. All was accomplished by a seamless blending of all components with no high tech at all.



What is the difference between front-end analysis, needs assessment, and needs analysis? What can I say to my management to persuade them to invest in front-end analysis for my projects?

Very often, people get hung up on terms or find a nuance that sets their "Weltanschauung" (view of the world) apart. It gives them a sense of uniqueness. All these terms, plus others such as performance analysis and gap analysis, refer to the initial analytic effort to identify and characterize gaps between desired and current or actual states.

Front-end analysis is a term and methodology created by Joe Harless. It is a rigorous set of activities which, when fully applied, gathers data that permits identification of desired and actual states, includes clear indications of what the nature of the gap is and identifies potential solutions -- both instructional and non-instructional -- for closing the gap. Many professionals include target population analysis, context analysis and job/task analysis as part of the front-end set of activities. I personally do not. I prefer making a distinction between the true front-end work and the subsequent target/context/task/concept analyses. I do this so as not to prejudice the front-end analysis results. I also include, as Joe Harless does, worth analysis, which calculates the extent to which it is worth closing the gap.

Needs assessment is often associated with Roger Kaufman of Florida State University. He, too, has a gap model, which works roughly as follows:

You begin by verifying whether there is a gap between the desired and actual outcomes of your system; if there is, you chain backward to identify gaps at the outputs, products and other stages. This allows you to display all the gaps in the system clearly and trace them back to initial causes. Needs assessment helps to identify where gaps exist.

Needs analysis is a sort of generic term for identifying gaps between ideal and actual circumstances. "Need" is only acceptable as a noun and is defined as the gap between ideal and actual states. There are a number of authors who have written about needs analysis and who offer a plethora of models.

The bottom line is that each term carries its own history. My preference is front-end analysis and I have created my own approach and methodology.

"Chaqu'un à son goût."

For more information see chapter 8, Analysis for Human Performance Technology by Allison Rossett in the Handbook of Human Performance Technology (Stolovitch & Keeps, Jossey-Bass Pfeiffer, 1999).




Why don't men listen?

Although this sounds like a somewhat facetious and "typical woman complaint" (from the male point of view, of course), I am delighted to respond. I would like to do this in two ways and then draw both together in terms of implications for learning and performance.

Neuroscientific research is making enormous strides in helping us unlock the mysteries of how animals and - importantly to us - humans process information. New brain research released in November 2000 suggests strongly that men and women do listen differently. Researchers at Indiana University School of Medicine have found through functional magnetic resonance imaging (FMRI) that there are fundamental biological differences in the way men and women treat auditory, verbal information.

[Figure: "He Heard, She Heard" -- brain scans showing that men listen with one side of their brains while women use both sides. Radiologists at Indiana University School of Medicine used functional magnetic resonance imaging to capture neural activity (highlighted in red and yellow) as volunteers listened to someone read aloud. Source: Indiana University School of Medicine.]

The Indiana study follows one conducted at Yale in 1995 by researchers Sally and Bennett Shaywitz, who discovered that women use both hemispheres when they read or engage in verbal tasks, compared with males, who draw only on specific areas of the left hemisphere. There is a suggestion that this may explain why girls speak sooner, read more easily and have fewer learning disorders. Women who have had strokes often recover their speech more quickly than men -- perhaps because they can draw more readily from both sides of the brain.

Social conditioning may also provide some clues. Watch boys playing in a schoolyard. They are far more aggressive than girls and tend to make sounds and noises whereas girls appear to engage in more socially cooperative activities involving a great deal of language.

While this research suggests that men do listen to -- or at least process -- aural information differently from women, there are still unanswered questions about why this is so. Dr. Roger Gorski, a UCLA neurologist, states, "…sex differences do exist in the brain but the full significance of this no one really knows." (Interview with the Los Angeles Times, November 29, 2000.)

Aside from the biological differences, there is another mechanism - this time a psychological one - that may come into play with respect to gender differences in listening. Both sexes engage in what is known as "perceptual defense" (see no evil, hear no evil…). We all shun or block out unpleasant information at various times. Research has shown that in general, women use more words than men. Up to 25% more. In male-female relationships, many of the words, particularly if they are viewed as unpleasant, demanding or threatening, may be filtered out. Similarly, if the words are viewed as unimportant, they may simply not be "heard."

While this is true for all humans, less use of words coupled with socially generated gender distinctions as to what constitutes "unpleasant, demanding, threatening or unimportant" may result in men not listening to what is not to their liking or not in line with their priorities. Children do this frequently. So do learners and workers when what is being said is not what they want to hear. This can obviously have an important impact on workplace performance. Threatening, unpleasant or unwelcome information may be filtered out with negative performance consequences.

So to conclude, biology and psychology offer intriguing answers to age old questions about male-female differences. Both may play significant roles in explaining "why men don't listen."



Can you develop interactive, online learning teaching learners to use a software product when the software is being developed concurrently? If yes, how is this done to ensure cost containment and efficient use of resources?

Tough question. This is a problem that is not online specific. The question really relates to the concurrent development of learning and performance support (interactive and online or not) solutions while products or processes are also in development. This is a challenging problem that has been plaguing learning and performance support professionals for a long time. The short answer to your question is, "Yes." However, certain conditions have to be met.

  1. Cost containment under these conditions is not easy. Build into your contract with the client a set of mutually acceptable assumptions. Deviation from these increases costs. Signal deviations to the client as soon as they arise.
  2. The client has to provide clear, documented specifications -- the best available -- with reasonably accurate descriptions of functionality. If the software/product/process is an upgrade from a current version, the documentation should include information on key differences. Pictures, diagrams, screen displays and other visual materials are also most helpful. The better and more complete these are, the less the uncertainty and, hence, the cost.
  3. The client must provide access to true subject-matter experts. These are probably not very available, so it is the learning/performance support specialist's responsibility to set up specific times for Q & A sessions. From an efficiency perspective, it is essential to establish specific review times to verify the task analysis, design document and prototype versions of the training/performance support materials. Stick to these. Use e-mail to reduce the pressure on subject-matter experts' time.
  4. There must be a clear understanding between client and learning/performance support developers that early versions of the materials (online or otherwise) will contain gaps that will be filled in as the software/product/process stabilizes. Specify who is to fill in which type of gap. Make this person the final authority. Too many cooks...add to time and cost.

Some tricks I have used in the past are:

  • Videotape the expert as s/he walks you through the software, simulating how it works, drawing screen displays and diagrams as s/he speaks and describing the novel features of what is being developed. Ask questions and record the answers on the videotape. This saves a lot of time as it can be reviewed by all learning/performance support developers. During the videotaping have someone act out the learner role and record the subject-matter expert's instruction. Verify for accuracy.
  • With subject-matter experts, create a complete task analysis as though the software/product/process is complete and stable. Verify weekly to identify updates with the appropriate individual/s.
  • Develop the learning/performance support materials modularly and as if the system were stable. Leave gaps where decisions have not yet been made. The Manhattan Project to create the atomic bomb was conducted this way. Developers skipped over unsolved problems and continued as though they had been resolved, then went back and revised as new information emerged.
  • If the software/product/process developers indicate that there is more than one path they may take at some point, obtain information on each of the paths and build training or performance support for each alternative. Be sure to budget time and money to cover this "multiple development".
  • Finally, have the courage to leave gaps. Where information is not available, leave a box with instructions to subject-matter experts or reviewers to fill in these gaps. Specify the appropriate expert.

Learning and performance support development is an iterative process. When content is still being developed, the iterative loops require that more and more accurate information be provided with each cycle. Interestingly, online development in many ways simplifies this painful process. It is easier to change content online than to redo print and video materials. Just be sure that you size up the degree of uncertainty of the software/product/process and build a large contingency into the budget to cover the rework and revision cycles. I recommend a contingency of 50% or greater, depending on how far the development has gone. And remember, rework takes time. The greater the time pressure, the higher the contingency budget. Contain costs and maintain efficiencies through tight project management, clear communication and specificity of questions, requests and responsibilities.


What is the difference between a performance, an accomplishment, a competency and a skill?

Excellent question as many people confuse these terms or use them incorrectly. This has severe consequences for organizations that are engaged in competency modeling when they have not clearly defined what they mean by this. Let me add two more words to this list, characteristic and value.

In brief, performance is a term that contains two major components (as Tom Gilbert has so clearly explained in his book Human Competence): behavior and accomplishment. Performance results from persons doing something and achieving something. You work out in a gym (behavior); you achieve a "buff" look (accomplishment). Behavior is the cost -- the effort expended to attain the accomplishment. Accomplishment is the benefit -- the desired result or valued outcome. In a performance-based system, you must account for both components. I recommend that organizations focus on performance modeling rather than competency modeling. Performance modeling specifies what must be done and what the measurable result must be. It is rigorous and meaningful in business terms.
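Gilbert summed up this cost-benefit relationship in a simple ratio; the rendering below is my paraphrase of his formulation in Human Competence:

```latex
W = \frac{A}{B}
\qquad \text{that is,} \qquad
\text{worthy performance} = \frac{\text{value of the accomplishment}}{\text{cost of the behavior}}
```

Performance is worthy when the value of the accomplishment comfortably exceeds the cost of the behavior that produces it.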

A skill is an ability to do something. Juggling is a skill. When you assess individuals, you identify skills they possess. These can take many forms: psycho-motor, verbal, artistic, analytical... A competency is an ability that is required to achieve something required by the environment -- usually for a job. One assesses people to identify skills. One studies the job to define competencies. In other words, you analyze a job to determine competency requirements. You analyze people to determine if their skills match the competency requirements.

Do not confuse competencies with characteristics. We hire for characteristics -- those traits which a person exhibits as part of "who they are". In the work setting, it is virtually impossible to alter a person's characteristics. We train for competencies -- those abilities required to do the job.

Finally, values are ethical belief systems people possess. These guide their judgments and, often, their life priorities. While we can influence values, these are difficult to alter.

Putting it all together. I recommend that organizations analyze jobs in terms of the performance requirements (behavior + accomplishment), identify the competencies needed to perform, hire/assign persons to the job who possess the appropriate characteristics for the job and whose values are in alignment with the organizational culture.


Is e-learning all it's cracked up to be?

Elliot Masie has suggested that the "e" in e-learning is more appropriate for "enthusiasm" than "electronic." I'm with him on this. Whenever an innovation hits the learning world -- video, multi-image, CBT, intelligent tutoring systems, CD-ROM -- there is a wild rush to exploit what appears finally to be the philosopher's stone. Content lead, it is hoped, will miraculously be transmuted into learning gold.

Let's get real. One of the few research constants in learning and media is that the medium is not the message. The active ingredients for learning that lead to performance are in the instructional design. Electronic delivery mechanisms can increase flexibility, speed, consistency, access, communication and many other efficiency features of instruction. They cannot transform the effectiveness of the learning.

The value of e-learning is potentially great. It can offer a one-stop shopping portal (or hub) through which learners can enter and be guided to the appropriate resources for learning. The e-learning environment can simulate a classroom, a library or a lab. Excellent e-learning can offer guidance, registration, stimulus variation, self-paced learning, interactivity with media or a variety of individuals and groups. It can provide testing, feedback, performance support and so many other features. However, it all depends on the design.

To conclude, there is the hype, the potential and the reality. The hype is loud to the point of being obnoxious. Companies and individuals out to exploit a new means for becoming rich have everything to gain by building promises and excitement. Remember, if it sounds too good to be true... The potential is truly great for creating a stimulating and responsive environment with awesome accessibility to a world of learning from directed to exploratory. The reality is that there are still too few examples of excellent e-learning and very little research demonstrating learning gains due to this vehicle.

My advice is to listen, look and learn about e-learning, but keep yourself firmly grounded. Caveat emptor...let the buyer beware!


What is performance consulting?

Performance consulting is the set of professional activities one engages in to identify gaps in performance -- either opportunities or problems -- that affect an organization's results. The performance consulting process consists of a series of steps that include

  • Identifying business goals that are not currently being met, either proactively by scanning the organization and client groups, or reactively by responding to requests for assistance.
  • Specifying, through active listening and research, what the desired performance requirements are.
  • Determining, if relevant, current levels of performance.
  • Describing the magnitude, value and urgency of the gap between desired and current states (a simple way of recording these is sketched after this list).
  • Identifying the factors that affect the gap.
  • Identifying potential means for attaining desired performance.
  • Recommending a basket of performance interventions that are appropriate, economical, feasible and acceptable to the organization and affected stakeholders.
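As a purely illustrative sketch (the field names and figures are invented, not a prescribed format), the gap a performance consultant documents through these steps might be captured in a simple record like this:

```python
# Purely illustrative: a simple record of a performance gap identified during
# consulting. Every field name and figure below is invented for the example.
performance_gap = {
    "business_goal": "Meet this year's customer-service targets",
    "desired_state": "Order error rate at or below 0.5%",
    "current_state": "Order error rate of 3.5%",
    "magnitude": "3.0 percentage points",
    "value": "Estimated annual cost of rework and lost customers",
    "urgency": "High",
    "contributing_factors": ["no feedback on accuracy", "outdated job aids"],
    "candidate_interventions": ["daily feedback report", "redesigned job aid"],
}
```

However it is recorded, the essential discipline is the same: every recommendation traces back to a documented gap with a stated magnitude, value and urgency.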

Performance consultants play many roles. They act as account managers to the client throughout the life of a project, guides to the client in decision-making and acting, representatives of the client to the team that develops the interventions, and advocates for the development team to the client.

The key focus of performance consulting is the attainment of desired organizational results through people. The main body of professional knowledge that supports the performance consultant is Human Performance Technology.


What is the difference between Human Performance Technology (HPT) and Human Performance Improvement (HPI)?

Human Performance Improvement is the goal. Human Performance Technology is the means for achieving the goal. HPT is a recognized body of professional knowledge and skills whose aim is the engineering of systems that result in accomplishments that the organization and all stakeholders value.

Many people talk about HPI in loose terms and confuse ends and means. HPT is a disciplined professional field that is systemic in its vision and approach, systematic in its conduct, scientific in its foundation, open to all forms of intervention and focused on achieving valued, verifiable results.


How do you calculate Return on Investment (ROI) in training or human performance improvement interventions?

The best answer is to direct you to two locations within this Web site. In our Performance newsletter, Winter/Spring 1999, there is a brief article on calculating ROI. Our Publications section contains a more detailed article, Calculating the return on investment in training: a critical analysis and case study.

For more information, refer to Jack Phillips's book, Return on Investment in Training and Performance Improvement Programs, 1997, Houston, Texas: Gulf Publishing. It can be purchased through the ISPI bookshop at www.ispi.org.
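For orientation only, the arithmetic underlying most ROI calculations, including Phillips's approach as I understand it, is straightforward; the figures in the example below are invented:

```latex
\text{BCR} = \frac{\text{program benefits}}{\text{program costs}}
\qquad
\text{ROI}(\%) = \frac{\text{net program benefits}}{\text{program costs}} \times 100
```

For example, a program costing $50,000 that produces $150,000 in monetary benefits has a benefit-cost ratio of 3:1 and an ROI of (150,000 - 50,000) / 50,000 x 100 = 200%. The difficult part, of course, is isolating the program's effects and converting the benefits to monetary terms.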


Is ISD outdated?

No! Instructional Systems Design (ISD) is the main, disciplined and systemic means for engineering effective learning systems. There has been a recent spate of nonsensical, superficial articles about ISD being overkill in today's demanding world. For this point of view, read Training Magazine's April 2000 issue.

What people fail to realize is that ISD is like an accordion that can be expanded or contracted to fit the context and constraints of our fast-paced, high-demand work environments. It offers a mental model for how to approach the engineering of learning interventions that achieve desired, verifiable results. Following the steps of ISD with fewer resources and less time increases the risk of missing the learning and performance target. However, it is still surer than intuitive instructional design, which is where humans have been throughout history.

Rather than eliminate ISD, use it to develop structures, templates, systems to increase efficiency of learning systems design and development. Create tools to speed up the production process. Develop adaptable frameworks for instruction, practice and testing.

Don't throw the ISD baby out with the inefficiencies bath water. ISD is not dead. It needs to be better understood and adapted to our hyperproductive world.


