Natural Resources Canada
Earth Sciences Sector - International Division
Guidelines for Client Satisfaction Measurement Activity
4. An ESS Process for Measuring Client Satisfaction
Previous: 3. Why is it Important to Measure Client Satisfaction? | Index: Guidelines for Client Satisfaction Measurement Activity | Next: 5. Common Pitfalls and How to Avoid Them

The following eight-step process offers a practical framework for carrying out any CSM initiative and will help ensure that the project is conducted systematically.

A Systematic Process for Measuring Client Satisfaction
  1. Identify key programs, products and services
  2. Determine who your clients are
  3. Determine your objectives
  4. Develop your measurement strategy
  5. Choose an approach for obtaining client feedback
  6. Design and test your instruments
  7. Gather and analyse your information
  8. Put your findings to work

Step 1: Identify your key programs, products, and services

The first step in the process of measuring client satisfaction is to clearly identify your programs, products, and services. This is not always as simple as it may appear. For example, scientific research has not always been viewed as a product or service in the traditional sense, yet there are several important client communities who may have an interest in the research that is produced or published by the Sector. Thinking about the output of your work and who receives, uses or benefits from it is often a useful starting point for identifying relevant programs, products, and services.

Step 2: Determine who your clients are

This also can be more difficult than it first appears because, in addition to direct clients, you may also have to consider the impact of your activity on indirect clients. Indirect clients do not normally deal with ESS directly but still have a stake in how a product or service is provided. For example, although the general public ultimately benefits from the knowledge gained from Polar Continental Shelf Project (PCSP) support of scientific field parties, the parties themselves are direct clients. Similarly, the Geological Survey of Canada (GSC) produces geoscientific information and research of specific interest to the science, engineering and academic communities, yet this information is made widely available to the public.

"... in addition to direct clients, you may also have to consider the impact of your activity on indirect clients."

Generally, the best way to identify your client audience is to think first about who receives the product or service (output) you provide and second about who is the ultimate user of the product or service. Often, the person who receives the product or service you provide will be an intermediate client in a chain of transactions leading to an end-user client. For example, topographical maps from the Mapping Services Branch are sold under license agreements to retail distributors who, in turn, sell to the general public. Hence, third party issues become important as well.

Within NRCan generally, and ESS specifically, there is a wide range of clients. Some employees serve mainly internal clients; others deal with external clients such as federal or provincial agencies, the scientific and professional communities, or the business community; still others deal with the general public. Depending on the specific product or service, you may be dealing with several different client segments. In this case, you must determine whether you want to measure ALL segments or a selection (based on criteria which would identify the "key" clients).

Often, the person who receives the service you provide will be an intermediate client in a chain of service transactions leading to an end-user client.

Step 3: Determine your objectives

Once you have identified your key programs, products, and services and determined who your clients are, you are then in a good position to determine your objectives for measuring client satisfaction.

You should begin by asking yourself two key questions:
  1. Why do you want to measure client satisfaction?
  2. How will the information be used?

You should also consider how your answers to these questions support broader policy or program objectives. Your answers will also help you to be specific about your information needs and to decide on the measurement strategy which best supports your objectives.

Information can be used for product or service improvement, for optimizing investment in the service effort, for consolidation or pruning of products or services, and for performance accountability, such as whether service standards are being met. Often, deciding on how and to whom the results of a measurement initiative are to be communicated is a key consideration in setting the objectives.

Questions to consider:
  • Are your clients' expectations consistent across and within the different client segments?
  • Are client expectations realistic given the organization's ability to meet them?
  • Does client satisfaction vary among client segments, among regions, or over time?
  • Do your service standards relate to your clients' expectations?

When formulating your objectives, you need to consider the specific indicators you will use to measure satisfaction. Normally, you will want to go beyond simply asking clients if they are satisfied with your product or service; you will also want to determine the degree of importance clients attach to a product or service and the extent to which you are meeting their expectations.



Step 4: Develop your measurement strategy

At this point, you have to decide how, when, and from whom information will be sought. The approach chosen will depend on your objectives, as well as constraints such as budget, timetable, level of detail and accuracy required, and availability of qualified personnel. If your CSM project involves contracting with outside consultants, you may need to obtain approval from CCSB-PWGSC before proceeding. NRCan's Communication Branch can provide you with advice regarding contracting with external research suppliers and contractors.

Your overall measurement strategy may be based on a combination of approaches resulting in multiple lines of evidence from which to draw conclusions regarding client satisfaction levels. For example, client feedback may be sought proactively through direct client outreach or through indirect means such as the analysis of existing client data records.

With regard to particular initiatives, you will need to decide what degree of formality is required. For example, research involving probability sampling, where all members of a population have a known probability of being in the sample, tends to require more formality than non-probability approaches. Furthermore, if the objective is to obtain data that will be communicated to others less familiar with the background and context of the initiative, a more formal process is warranted. Where feasible, and when formal measures are required, use a systematic rather than an ad hoc approach to measuring client satisfaction; following the steps outlined in these guidelines will help ensure that the initiative is approached systematically.

Remember that satisfaction is relative rather than absolute. A well-conceived strategy will consider the need for information that can be compared over time or to a meaningful benchmark. This will allow you to determine whether satisfaction levels are improving, staying the same, or declining.

"... there is little point to investing in areas where client priority is low and satisfaction high when there may be a much better return in areas of high priority and low satisfaction."
Canada's Auditor General

NOTE: For more information on the merits of one approach over another, see the NRCan publication entitled "Conducting Public Opinion Research" (listed in the Selected Bibliography).

Tip: Developing and maintaining complaint and recovery systems is an important part of an overall client satisfaction strategy and needs to be considered.

Step 5: Choose an approach for obtaining client feedback

Several approaches can be used to measure client satisfaction, including: client surveys (mail, telephone, electronic), client consultations (focus group sessions, panel discussions, personal interviews), and observations. Any one or a combination of these approaches may be appropriate depending on your objectives and, of course, your constraints. Each approach has advantages and disadvantages.

Client Surveys - are usually undertaken when you wish primarily to obtain statistical (quantitative) data regarding satisfaction among your client population. Normally, feedback is sought by telephone, mail or electronically from a representative sample of the target client population. In a probability sample, findings would be projectable, within a given margin of error, to the entire population of clients. Thus, it is possible to draw rather definitive conclusions regarding the views of all clients in the target population towards a product or service.

Pros:
  • statistical reliability
  • data can be projected
  • allows for trend monitoring
Cons:
  • limited ability to explore issues
  • relatively costly
  • no face-to-face contact
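
Although not part of the original guidelines, the idea that findings from a probability sample are "projectable, within a given margin of error" can be illustrated with a short calculation. The Python sketch below uses the standard margin-of-error formula for a proportion, with a finite population correction; the sample and client-population sizes shown are hypothetical.

  import math

  def margin_of_error(sample_size, population_size=None, p=0.5, z=1.96):
      """Approximate 95% margin of error for a proportion estimated from a
      simple random sample of clients. p=0.5 gives the widest (most
      conservative) margin."""
      moe = z * math.sqrt(p * (1 - p) / sample_size)
      if population_size:  # finite population correction for small client bases
          moe *= math.sqrt((population_size - sample_size) / (population_size - 1))
      return moe

  # Hypothetical example: 200 completed surveys from a client base of 2,000
  print(f"{margin_of_error(200, 2000):.1%}")  # about +/- 6.6%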

Client Consultations - are undertaken when your goal is to obtain qualitative feedback from selected clients. Qualitative research is exploratory and is often used to give managers a better understanding of an issue prior to conducting more costly, in-depth research. Qualitative feedback cannot be taken to represent the views of clients in general; findings from client consultations should be considered indicative rather than definitive. Client consultations can be formal or informal and can be extensive or limited in scope. Three common types of consultations are: focus groups, panels, and personal interviews. Formal focus group sessions usually involve establishing screening criteria to recruit participants who are representative of the target audience, a moderator's guide of discussion themes, the use of a moderator to guide the discussion and manage the group dynamics, and a report documenting the results.

A focus group is much like a group interview and, as with personal interviews, a lot of valuable information can be documented (even from verbatim comments) in a relatively short period of time.

Panel discussions are similar to focus groups but differ in the degree of formality used. Many panels are formed for advisory purposes; hence, panelists are often chosen for their expertise, knowledge, or experience with a particular issue or subject. Panel discussions do not necessarily require formal recruitment screening criteria, special facilities, or professional moderating. However, if an objective of the discussion is to obtain formal feedback pertaining to client satisfaction, you should approach the discussion systematically.

Personal interviews can be conducted by telephone or face-to-face to allow for in-depth probing. Provided confidentiality is assured, they are often ideal for obtaining feedback on sensitive topics or issues, especially when conducted one-on-one. You should develop and use an interview guide when conducting personal interviews. This will help keep the interview on track, provide consistency across interviews, and serve as an aid in assessing findings.

Pros:
  • can explore in detail how clients view an issue or concept
  • can adapt instantly to client responses
  • can access hard to reach audiences
Cons:
  • cannot project findings to population
  • does not yield statistical measures
  • results are not conclusive

Tip: To help keep costs down, consider
  • using readily available information;
  • using qualitative approaches;
  • using a survey sample rather than a census.

Observations - can be direct or indirect. Examples of direct observations include the number of visitors to a site, length of time it takes to serve clients, and client reactions to a product or service. Indirect observations might include whether employees follow the proper service standards and procedures when dealing with clients or whether complaints have been dealt with in a timely and efficient manner. Observation checklists are often used by observers when recording information. Mystery caller (or shopper) programs are an effective way to assess service levels using observation. In such programs, researchers pose as customers and observe and record key performance aspects of the service transaction.

Pros:
  • information can be objectively verified
  • no effort required on part of respondent
  • can be qualitative or quantitative
Cons:
  • employees may react negatively to being observed
  • difficult to control intervening variables
  • satisfaction can only be indirectly inferred from the observation

Tip: Be careful of the complaint trap
Low complaint rates do not necessarily reflect high levels of client satisfaction. Research has shown that less than 4% of dissatisfied clients complain. Clients may not complain because of a lack of interest (it is easier to switch to a competitor where that option exists), because they do not know where, how, or to whom to complain, or because their geographical location makes it more difficult to do so.
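
As a rough, back-of-envelope illustration (not from the original guidelines), the cited figure that fewer than 4% of dissatisfied clients complain implies that recorded complaints greatly understate actual dissatisfaction. The complaint count below is invented.

  # If at most 4% of dissatisfied clients ever complain, the complaint log
  # gives only a lower bound on the number of dissatisfied clients.
  complaints_received = 12      # hypothetical count from a complaint log
  complaint_rate = 0.04         # "less than 4%" figure cited above
  implied_dissatisfied = complaints_received / complaint_rate
  print(f"At least ~{implied_dissatisfied:.0f} dissatisfied clients implied")
  # At least ~300 dissatisfied clients implied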

Question wording

How satisfied are you that ESS is meeting its commitments to Canadian taxpayers through quality service?

Although initially this question seems to make sense, after closer inspection several flaws become evident:

  • there is an inherent bias due to the fact that no reference is made to dissatisfaction.
  • it is subjective because it asks respondents to speak for others (Canadian taxpayers).
  • it is unclear whether responses apply to meeting commitments, quality service, or both.
Keeping it simple, try:
How satisfied or dissatisfied are you with ESS's service?

Step 6: Design and test your instruments

Data collection instruments such as questionnaires, interview/discussion guides, moderators' schedules, and observation checklists should be designed with the measurement objectives in mind. Consider using closed-ended questions (versus open-ended) when designing a survey questionnaire, as they take less time for respondents to fill out and are easier to process. Although it depends somewhat on the medium used, try to keep survey response times to a reasonable length (15 to 20 minutes); otherwise respondent fatigue will become a factor and may adversely affect the response rate. For in-person interviews, one hour maximum is a good yardstick. Also, keep your questions as neutral as possible to avoid bias.

Tip: Balanced Scale!
A balanced scale offers equal opportunity for favourable or unfavourable responses.

To what extent are you satisfied or dissatisfied with ESS's service? Would you say you are:
[] very satisfied
[] somewhat satisfied
[] neither satisfied nor dissatisfied
[] somewhat dissatisfied
[] very dissatisfied
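
If scale responses are later tabulated, they are commonly coded numerically and summarized, for example as a "top-two-box" score (the share answering somewhat or very satisfied). This is a general survey-analysis convention rather than anything prescribed by these guidelines; the sketch below uses invented responses.

  # Coding the balanced five-point scale above and computing a top-two-box score.
  SCALE = {
      "very satisfied": 5,
      "somewhat satisfied": 4,
      "neither satisfied nor dissatisfied": 3,
      "somewhat dissatisfied": 2,
      "very dissatisfied": 1,
  }

  def top_two_box(responses):
      """Share of respondents answering 'somewhat satisfied' or 'very satisfied'."""
      return sum(SCALE[r] >= 4 for r in responses) / len(responses)

  # Illustrative (invented) responses
  sample = ["very satisfied", "somewhat satisfied", "very dissatisfied",
            "somewhat satisfied", "neither satisfied nor dissatisfied"]
  print(f"Top-two-box satisfaction: {top_two_box(sample):.0%}")  # 60%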

Survey questionnaires should always be pre-tested with a small sample of respondents (usually up to 10) to determine whether questions are clear and logical, and to assess overall ease of administering as well as the respondent's general level of comfort with the line of questioning. Revisions should be made taking into account the results of the pre-test. Final instruments should then be prepared.

Step 7: Gather and analyse information

It is important to be as accurate and efficient as possible when gathering information. Depending on the size and complexity of your initiative, you may want to seek assistance from outside consulting firms that can supply resources and expertise. The use of trained, experienced personnel can be an important success factor, particularly in large surveys.

When gathering and analysing information, take appropriate steps to ensure that objectivity is maintained; this will protect the integrity and credibility of the findings. Ensure that sufficient arm's-length protocols are in place to avoid the possibility of actual or perceived bias ("students should not be marking their own exams"). This could mean getting assistance from internal third parties (e.g. NRCan Communications, Quality Advisor, Business Development) who may be able to contribute their expertise but who do not have a direct stake in the outcome. It might also involve the use of outside consultants.

Take care to interpret findings as objectively as possible. Low satisfaction scores are not necessarily a bad sign - the real value of client satisfaction measurement to ESS as an organization is in identifying opportunities for improvement.

Communication is essential: distribute findings widely to staff (especially front-line staff) and to your clients. In the end, the ultimate value of any CSM activity depends on how much the findings are used.

 

Satisfaction vs Importance: An ESS Example
In a recent ESS survey, courteousness received the highest satisfaction score among 12 service attributes but ranked only 10th overall in importance. This suggests that improvement efforts should focus on other attributes.
The same survey found that the largest gaps, i.e. those representing the best opportunities for improvement, were for "amount of time it took to serve you" and "availability of service".
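
The gap logic in this example can be made concrete with a short sketch. The attribute scores below are invented (they are not the actual ESS survey results); the gap is simply importance minus satisfaction, so the largest positive gaps mark the best opportunities for improvement.

  # Illustrative importance-versus-satisfaction gap analysis (1-5 scales).
  attributes = {
      "courteousness":                 (3.2, 4.6),  # (importance, satisfaction)
      "amount of time to serve you":   (4.5, 3.1),
      "availability of service":       (4.4, 3.3),
      "accuracy of information":       (4.3, 4.0),
  }

  gaps = sorted(
      ((imp - sat, name) for name, (imp, sat) in attributes.items()),
      reverse=True,
  )
  for gap, name in gaps:
      print(f"{name:30s} gap = {gap:+.2f}")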

Step 8: Put your findings to work

It is essential to capture the background procedures, findings, conclusions, and recommendations from formal client satisfaction measurement initiatives in a written report. This will ensure that the information can be communicated to others (such as staff or superiors) who also have a stake in the initiative and its outcomes. You should also consider making a summary of the results available to participants upon request. Documenting the findings will also provide a basis for comparative analysis over time, if desired.

When reporting findings, keep in mind that absolute measures of client satisfaction are less meaningful than comparative measures, such as the "gap" between the level of satisfaction associated with an aspect of a product or service and the importance or expectations attached to it. To illustrate, the finding that four out of five clients (80%) are very satisfied with a particular aspect of your product or service takes on added meaning if you know that clients also place a high degree of importance on that attribute compared to others. Similarly, if the 80% figure represents an increase of 10 points from a study conducted a year earlier, then the trend may be seen as quite positive.
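
As a small illustration of reporting a comparative (trend) measure rather than an absolute one, the sketch below mirrors the 80% example in the preceding paragraph; the attribute name and prior-year figure are hypothetical.

  def report_trend(attribute, current_pct, previous_pct):
      """Describe a satisfaction score relative to an earlier benchmark."""
      change = current_pct - previous_pct
      direction = "up" if change >= 0 else "down"
      return (f"{attribute}: {current_pct:.0f}% very satisfied "
              f"({direction} {abs(change):.0f} points from {previous_pct:.0f}%)")

  print(report_trend("overall quality of service", current_pct=80, previous_pct=70))
  # overall quality of service: 80% very satisfied (up 10 points from 70%)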

Ultimately, how you act on the findings (from a client perspective) will decide the success of the initiative. The increasing focus on performance results means that it is no longer sufficient simply to treat reporting as the final phase of the measurement process; findings must be used to implement meaningful change - change that leads to performance improvement. The actual use of results will differ depending on the nature of the initiative but should be clearly thought out when determining your overall objectives.

The increasing focus on performance results means that it is no longer sufficient simply to treat reporting as the final phase of the measurement process; findings must be used to implement meaningful change - change that leads to performance improvement.