Jeff Weinrach

New Mexico Environment Department

Introduction. Several years ago, New Mexico embarked on a strange, yet wonderful, odyssey - the Green Zia Environmental Excellence Program. The Program (Green Zia) was designed to take the successful elements of the Malcolm Baldrige Criteria for Performance Excellence and of state Quality Award programs that utilize the Baldrige Criteria, such as Quality New Mexico, and couple them with the principles of pollution prevention (P2) and Environmental Management Systems (EMS). The goal was to develop an effective set of EMS tools and methodologies that would provide organizations in New Mexico, especially small businesses, with a long-term, viable approach to sustainable development through environmental excellence. At the time, we were the only state formally adopting the Baldrige Criteria, albeit modified, as the cornerstone of our P2 initiatives. After five years of implementation, the Green Zia Program has become a model for other state P2 programs and is well aligned with many federal and international environmental management initiatives.

One of the principal elements of Green Zia, and the focus of this paper, is the third-party assessment and feedback report process. Like our Baldrige brethren at both the national level and the state Quality Award Program level, Green Zia employs a number of volunteers to read applications submitted to our state Environment Department, provide nonprescriptive feedback on strengths and opportunities for improvement, determine a "score" for the application based on a 1000-point scale, and possibly conduct a site visit to verify and clarify the application. This paper will describe all aspects of the third-party examination process and provide some personal insight into the strengths and opportunities for improvement of our own process.

Background. The Malcolm Baldrige Criteria for Performance Excellence and the associated application processes, at both the national and state levels, typically include the following elements:

• Organizational development of an approach to quality utilizing the Baldrige Criteria as a framework;

• Submittal of an application (upwards of 50 pages) that describes the organization's approach to quality and the applicant's key results;

• Review of the application by a team of examiners, who prepare a feedback report including both strengths and opportunities for improvement and a score based on a 1000-point scale;

• Determination by a panel of judges as to whether the applicant, based on the feedback report and score, will receive recognition and/or an award by the sponsoring organization; and

• Improvement of the organization's approach to quality, based, in part, on the feedback report.

The Baldrige Criteria contains seven "categories" of quality that include leadership, planning, information management, and results. Recognizing the inherent similarities between quality and environmental management, we developed the Green Zia Program to use a similar set of seven categories to address various aspects of an organization's EMS. Six of the seven categories address approach and deployment of EMS processes and systems: How do the various components of the EMS work and to what degree are they being deployed throughout the organization? The seventh category specifically addresses the results obtained through the deployment of the EMS. For organizations that have an EMS in place and are looking for opportunities for improvement, each category is divided into "items" that allow the applicant to describe aspects of their EMS in greater detail. For example, the leadership category is divided into two items: organizational leadership and community leadership. For organizations that do not have an EMS in place yet, we recommend that their application be written to the category level (~10 pages) instead of to the item level. The feedback reports are written at the same level as the application.

Each category (or each item) has a number of points associated with it. The total number of points that an applicant can receive is 1000 points. (Note: Baldrige winners typically will score around 700 points.) The leadership category for Green Zia is worth 125 points out of the total of 1000 points. The results category for Green Zia is worth 325 points (or approximately one-third of the total points available!). This implies to the applicant that the results section is of a higher "weight" than any of the other categories in terms of an overall score. The score for a particular approach/deployment category (how many points awarded compared to the total number available) is based on a number of factors such as whether a particular EMS approach is systematic, well-deployed across the organization, aligned with environmental and organizational goals and objectives, and so on. The score for the results category is based on the EMS performance levels, trends, and comparisons to other organizations as well as the relevance of the results to the overall EMS deployment. If an applicant were to reapply to Green Zia or to a state Quality Award Program on a regular basis, a different group of examiners would be used each time to review the application. This is to assure that each review is objective and is based solely on the merits of the current application. Likewise, examiners who have profound knowledge of the applicant or who have a conflict of interest with the applicant would not serve on the particular examination team.
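The weighting arithmetic described above can be sketched as follows. Only the leadership weight (125 points), the results weight (325 points), and the 1000-point total are stated in this paper; the remaining category names and weights below are hypothetical placeholders chosen so that the points sum to 1000.

```python
# Sketch of the Green Zia weighted scoring arithmetic. Leadership (125),
# results (325), and the 1000-point total come from the paper; the other
# category weights are illustrative placeholders, not official values.
CATEGORY_POINTS = {
    "leadership": 125,            # stated in the paper
    "planning": 110,              # placeholder weight
    "information_management": 110,  # placeholder weight
    "category_4": 110,            # placeholder weight
    "category_5": 110,            # placeholder weight
    "category_6": 110,            # placeholder weight
    "results": 325,               # stated in the paper
}

def total_score(percent_awarded):
    """Sum each category's points weighted by the examiners' percentage
    score (0-100) for that category: points awarded vs. points available."""
    return sum(
        CATEGORY_POINTS[cat] * percent_awarded[cat] / 100
        for cat in CATEGORY_POINTS
    )
```

Under this sketch, an applicant scored at 70 percent across the board lands at 700 points, the level the paper notes is typical of Baldrige winners, and the results category alone contributes roughly one-third of any total.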

In spite of these precautions, there is an inherent subjectivity to the application review. Teams may be comprised of four to seven people, typically of diverse backgrounds. Each team member reviews the entire application. Consensus is used to ensure that the team agrees on all aspects of the review process. Because of the subjectivity in the process, scores are usually not provided to the applicant directly. The applicant may be provided with a scoring "band" to let them know where the examination team thought the applicant's system resides (100-200, 300-400, and so on). The thinking is that if an application scores a 350 one year and a 325 the next year (with a different review team), the applicant may think that their EMS has not improved when, in fact, it may have improved but was evaluated slightly differently by the second review team. Many organizations that utilize Baldrige as their quality framework conduct an internal self-assessment using the Criteria, in which case the score is known by the applicant and used to assess levels of performance.
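The band reporting described above can be sketched as a simple mapping. The 100-point band width is inferred from the "100-200, 300-400" example in the text, and the function name is mine, not the program's.

```python
def reported_band(score, width=100):
    """Map a raw consensus score to the band reported to the applicant,
    so that small year-to-year score shifts (e.g. 350 one year, 325 the
    next, from a different review team) do not read as regressions.
    The 100-point width is inferred from the paper's examples."""
    lower = (score // width) * width
    return (lower, lower + width)
```

With this mapping, the paper's example scores of 350 and 325 both fall in the same 300-400 band, which is exactly the ambiguity the band reporting is meant to absorb.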

Baldrige and Green Zia are built upon a foundation of performance. That is, there is no fixed "bar" that examiners look for relating to some particular level of effectiveness. Performance-based systems rely on measuring effectiveness as it relates to competitors, industry standards, benchmarks, or other approaches relevant to the applicant. Since these programs are not "certification" programs, we are not looking for a particular scoring threshold to be reached in order to tell an applicant that they have "succeeded" in their EMS implementation. On the contrary, the aim of these approaches is to foster continuous improvement through analysis and assessment, not through certification.

Choice of Examiners. As was stated in the previous section, examiners should not have a conflict of interest with the applicant, in order to maintain objectivity. The feedback reports do not prescribe solutions, nor do they use language such as "ineffective," "inadequate," "wonderful," "excellent," and so on. Examiners who have an interest in the applicant, whether positive or negative, have a difficult time avoiding this type of language. Effective examiners must have intimate knowledge of the criteria but do not have to have intimate knowledge of the applicant. For example, schoolteachers or administrators may be examiners for manufacturing applicants. As long as the examiners understand the criteria that the application is based upon, knowledge of the applicant is not necessary. The only exception is when the applicant uses a lot of lingo and terminology that examiners unfamiliar with the particular sector would not understand. In this event, it is helpful to have at least one member of the examination team who is familiar enough with the applicant's type of business that the lingo is understood.

As mentioned earlier, examination teams usually are comprised of four to seven members to assure diversity of viewpoint. Consensus is used to make sure that the team agrees with all aspects of the feedback report. Consensus is often a difficult process, but it is necessary to make sure that the feedback is as objective and reliable as possible. Teams are often comprised of a combination of new and experienced examiners, again to ensure diversity of viewpoint and to minimize subjectivity.

Application Review. Once the examination team has been identified, its primary job is to read the application and to prepare the feedback report. Reading a Green Zia application is not like reading a novel or even a proposal. It is more like trying to read a road map: there is no obvious beginning or end; it is more a matter of where the application takes the reader. Perhaps the most important section of the application is the organizational overview, which describes for the examiners what the organization does and why it does it. It also describes the organization's primary environmental impacts and why the organization is interested in minimizing or eliminating them. Ideally, the rest of the application tells the story of how the organization is addressing these impacts logically and systematically. If anything, the review of the application needs to be thorough, especially considering that the applicant may never have described all aspects of its EMS in this level of detail before. As examiners, we are looking for consistency and context. Are approaches sound and systematic? Are they deployed in all appropriate work units? Is there a "system" in place, or are activities carried out in a random manner? Are the results aligned with the processes that generated them? These are some of the questions that we consider during an application review.

After reading the application thoroughly, each examiner generates comments for each category of the application. The comments are nonprescriptive complete thoughts about the applicant's approach, deployment, and/or results relating to components of their EMS. For example, in the planning category, an individual examiner may comment that the applicant's EMS planning process does not include input from the organization's key customers and stakeholders (an opportunity for improvement), or the applicant's EMS information management process collects relevant data from all available sources (a strength). The comments usually include a statement illustrating the "context" of the comment; that is, why is the comment relevant to the organization. These "so what" statements usually reflect something that the applicant stated in the organizational overview as being important to them, such as their position in the market or the demographics of their customers. This is one of the major differences between Green Zia and other EMS approaches: Green Zia views every organization that applies as different - different issues, different approach to their EMS, and so on. The leadership system for a local machine shop may look quite different from the leadership system of General Motors. Each can be effective in its own right, but we should not expect the same approach to be followed. The organizational overview often provides the examiner with that critical context to determine whether the approaches used by the applicant "make sense."

Typically, we use what we call "category champions" in the consensus process. Even though each examiner needs to review the entire application and provide comments for each category or item, we typically employ category champions who are team members responsible for the consensus discussion for that particular category. For example, I may be the category champion for category 2 (strategic planning), which means that I will lead the consensus discussion for category 2 even though all team members have written comments for category 2. I should probably know the category 2 section of the application well if there are any issues that come up during consensus that the group needs to resolve.

After consensus has been reached for all seven categories of the application, the team then determines the score of the application based on the team's consensus. Scoring must be done after consensus. I have seen many examples where one team member was totally smitten with a section of the application and another team member thought that the section was inadequate. The consensus discussion for this section must allow for these two examiners to come to an agreement about the strengths and opportunities for improvement for the section. If they were to score prior to consensus, the team member that liked the section would score it high (70 percent for example) and the other team member could score it low (20 percent for example). This does not mean that the "real" score for that section should be 45 percent. It could be that one of the examiners saw something that the other examiner overlooked. In that instance, consensus would determine what the overall team impression of that section was and subsequently what the score of the section should be.

One note regarding consensus - it is not compromise. It is not "I'll give you 10 percent on leadership if you give me 10 percent on planning." Consensus means that each team member can "live" with the decision of the team. All points of view were heard. If one team member cannot agree with the direction the team is moving regarding the review, then consensus has not been attained and the group will need to do more work to achieve it.

Scoring. In the scoring process, which is conducted after each category has been agreed to by the team, we are looking for an overall sense of the application by section. Scoring is divided into scoring bands, both for the approach/deployment categories and for the results category. Each scoring band describes to the examiner what a section would have to demonstrate to score in that particular band. For example, a scoring band of 30 to 40 percent in the approach/deployment categories has the following attributes:

• A sound, systematic approach, responsive to the basic purposes of the item;

• Approach is deployed, although some areas or work units are in early stages of deployment; and

• Beginning of a systematic approach to evaluation and improvement of basic item processes.

A particular item of an application may exhibit two of these three attributes but not the third. We would still score it in the 30 to 40 percent band even though not all three of the bullets were recognized by the examining team. It is basically a best "fit" that we use to determine scores for the application. The scoring bands for the results section are quite different from those for the approach/deployment sections, since in that section we are looking for the quality of results, as opposed to how sound and systematic a process is or to what degree the process is being deployed.
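The best-fit logic above can be sketched as choosing the band whose attributes overlap most with what the team actually recognized. The three attributes for the 30 to 40 percent band paraphrase the bullets above; the 10 to 20 percent band and the overlap-counting rule are illustrative assumptions, not the program's official procedure.

```python
# Illustrative "best fit" band selection. The 30-40 band attributes
# paraphrase the paper's bullets; the 10-20 band and the overlap-count
# matching rule are hypothetical, not Green Zia's official procedure.
BANDS = {
    (10, 20): {
        "beginnings of a systematic approach",   # hypothetical attribute
        "major gaps in deployment",              # hypothetical attribute
    },
    (30, 40): {
        "sound, systematic approach responsive to the item",
        "approach deployed, some units in early stages",
        "beginning of systematic evaluation and improvement",
    },
}

def best_fit_band(observed):
    """Return the band whose attribute set overlaps most with the
    attributes the examination team actually recognized."""
    return max(BANDS, key=lambda band: len(BANDS[band] & observed))
```

Two of the three 30-40 percent attributes are enough for that band to remain the best fit, mirroring the "two of these three but not the third" situation described above.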

After the scoring has been completed, the team prepares the feedback report for the applicant, which includes the consensus comments for each section of the application plus an executive summary that highlights the most significant strengths and opportunities for improvement for the applicant. It is essentially like an abstract. Typically, the category champions write their particular sections, since they know those sections intimately. After the feedback report has been written (and edited!), the team usually performs a sanity check, looking at both the feedback report and the score to see if they are commensurate. We also look for inconsistencies between various sections of our feedback report. We do not want to call a process a strength in one section of the application and call the same process an opportunity for improvement in another section. Organizations that receive feedback reports from Malcolm Baldrige, state Quality programs, or Green Zia pay a great deal of attention to those reports as an effective vehicle for continuous improvement. We owe our best effort to these applicants so that they can make the best decisions about improvement.

Site Visits. After the feedback report and score are submitted to a panel of judges, who make recognition and award decisions, the judges may ask the examination team to conduct a site visit to verify or clarify the application. This is typically done if the judges feel that the applicant might be "worthy" of an excellence award, and the judges want to make sure that the application is stating the truth. If the team is asked to conduct a site visit, they are not going to be answering many questions about the Green Zia process or about what the team thought of a particular process, system, or result. The team is there to verify and clarify what the application stated. Does the process exist? Is it being used across the organization? Are the results what the application says they are?
Site visits have two primary aspects: interviewing people and looking at data and reports. Interviewing people is certainly an interesting aspect of the site visit. It is not unusual to get a different response to a particular question if you interview senior leadership versus the maintenance department or the third-shift workers. You can really see whether deployment is occurring if everyone you talk to has a similar answer to a question. It is often helpful to speak to people randomly so that management cannot "brief" people on how to answer particular questions.

Site visits for Green Zia last two days. Site visits for the Malcolm Baldrige Award take up to a week. The work is tiresome but fulfilling for the examination team. They often meet during the site visit (team members may split up during the site visit to talk to as many people as possible) to compare notes and discuss what they have observed and learned.

After the site visit, the team usually will revise the feedback report accordingly. Comments may be removed or modified to address what was discovered during the site visit. Opportunities for improvement that started with "It is not clear," a common preface to an opportunity for improvement, may be replaced with "The applicant does not have." Sometimes strengths become opportunities for improvement, or vice versa. The team does not necessarily need to rescore the application, but it does need to convey to the judges whether its impression of the applicant after the site visit was better, worse, or the same. This allows the judges to make any final decisions regarding award or recognition level.

Exit Interviews. After the applicants have received their feedback reports (they do not receive scores but may be notified as to which overall scoring band they fell in), the applicant may ask for an exit interview with the examination team. This is where the applicant can ask the team, "What did you mean by this comment?" If the applicant asks "What should we do about this?" or "How can we improve this?", the team usually does not answer those types of questions. The exit interview is meant to clarify what the feedback report was addressing. Programs such as Green Zia and Malcolm Baldrige provide applicants with a great deal of training and technical assistance to help them improve their organizations. It is more appropriate for applicants to use these vehicles than to ask the examination team how they can improve a particular process.

Conclusions. The time commitment to serve as an examiner for a program such as Green Zia is significant, but the process is extremely valuable and a lot of fun. It is a great learning experience for those who want to see how others address their EMS issues.
We often recommend that organizations thinking of applying to Green Zia first have someone in their organization become an examiner, just so they can see how the process works. As programs like Green Zia continue to grow and evolve, we will continue to learn much from our examiners, who are our primary eyes and ears.
