Intel built its own basic scorecard to measure process compliance

Published: 27 Oct 2010 21:40:43 PST

by Terry Leip

In 50 Words or Less:

  • Intel’s IT organization developed a simple scorecard to measure the progress of projects and monitor process compliance.
  • The scorecard included proof points and could address multiple product life cycles.
  • Keeping the scorecard simple helped everyone understand that process compliance is an issue an entire organization needs to own.

Five years ago, Intel’s IT organization decided to improve internal customer satisfaction and the efficiency of IT projects with a Capability Maturity Model Integration (CMMI)-based process improvement activity to develop standard processes for project management.

As a part of that effort, quality assurance audits were introduced to help measure and understand the progress in process compliance and to drive process improvement. In 2007, the number of projects using standard processes increased from 25 to more than 150. Audit resources, however, stayed constant, and management still wanted some level of compliance measurement for each project.

The quality assurance audit team needed a way to provide more continuous feedback about process compliance without expanding the time and effort involved in conducting audits.

The decision was made to provide a scorecard for project managers (PMs) to report their own compliance. This self-scoring was backed by randomly auditing projects to verify compliance and to gauge the accuracy of the self-reported scores.

While there wasn’t a clear vision for the solution, management and other stakeholders identified a few high-level requirements, shown in Table 1. Adding to the challenge, the first version of the scorecard needed to be ready for limited deployment in a few weeks, so any solution needed to be constructed quickly.

Approach

Given the requirements and short timeframe, our team decided to use the criteria defined in our existing audit checklists as the basis for the scorecard. In addition, we decided to keep the methods as simple as possible because we knew we would likely need to explain every aspect of the scorecards to our stakeholders at deployment. We broke down the work into four main tasks:

  1. Aligning the process requirements between life cycles.
  2. Defining weighting methods.
  3. Defining scoring methods.
  4. Creating the scorecard.

1. Aligning process requirements

We had five distinct product development life cycles in our organization, each with its own audit checklist, shown in Table 2. These separate audit checklists made it difficult to meet the requirement for providing a standard score regardless of life cycle, so we moved the list of deliverables for all life cycles into a single MS Excel worksheet.

We aligned deliverables and activities common among the life cycles by focusing on the intent of the deliverable or activity (even if they had different names or minor content differences). Each of these groups of common items was assigned a title, which we called a proof point. For each proof point, a set of criteria was documented addressing the absolute minimum required to demonstrate a proof point had been completed.

Timing for completion of the proof point (in other words, when it was due) was added in the form of the product life cycle (PLC) phase. For example, the requirements peer review proof point must be completed before the requirements baseline proof point.  

Table 3 shows a partial example of the alignment of the process requirements for "code peer review" and "construction baseline" proof points for the Cascading Waterfall and eXtreme Programming product life cycles. In some cases, no analogous process requirement existed for a life cycle, so criteria for these were simply marked as N/A.
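
To make the alignment concrete, here is a minimal sketch (in Python, purely for illustration; the actual artifact was a single MS Excel worksheet) of a proof point record that carries the minimum criteria per life cycle, marked N/A where no analogous requirement exists, plus the PLC phase by which it is due. The criteria text, the phase name and the third life-cycle label are placeholders, not the contents of Table 3.

    # Illustrative only: one proof point aligned across life cycles. The criteria
    # strings, the phase name and any life-cycle labels beyond those named in the
    # article are placeholders, not the real checklist contents.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class ProofPoint:
        name: str                           # proof point title
        due_phase: str                      # PLC phase by which it must be complete
        criteria: Dict[str, Optional[str]]  # minimum criteria per life cycle; None = N/A

    code_peer_review = ProofPoint(
        name="Code peer review",
        due_phase="Construction",           # assumed phase name
        criteria={
            "Cascading Waterfall": "Code peer review held and defects recorded",
            "eXtreme Programming": "Pair programming or review evidence captured",
            "Other life cycle": None,       # N/A: no analogous process requirement
        },
    )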

2. Defining weighting methods

Because some proof points were considered more or less important than others, we needed a method to ensure they would have a corresponding impact on the overall score. Assigning a weight to each proof point was an easy solution, but determining the correct weight was a challenge.

Lacking any organizational data that could tell us the relative importance to project success of one proof point vs. another, we relied on a small team of experienced PMs and auditors who discussed how to select the weights. Agreement on this topic was dramatically aided by two key concepts:

  1. Limit the number of weights: Too much freedom in weighting made decisions difficult, and fewer weighting choices simplified consensus. We settled on four possible weights (see Table 4), which also made the concept easier to explain to scorecard users.
  2. Weight categories of similar items first: Group proof points of similar types together (reviews and baselines) and weight the entire category, then identify weighting for any exceptions to those categories. Again, this reduced the number of decisions and drove consistency across similar proof points (see the sketch after this list).
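
A minimal sketch of that category-first approach, again in Python for illustration only: the category names, weight values and exception shown here are hypothetical, not the weights defined in Table 4.

    # Hypothetical category-first weighting: weight whole categories of proof
    # points, then override the exceptions individually. All values are placeholders.
    CATEGORY_WEIGHTS = {"review": 2, "baseline": 3, "plan": 1}
    EXCEPTIONS = {"Requirements baseline": 4}   # individually weighted proof points

    def weight_for(proof_point: str, category: str) -> int:
        """Return the weight for a proof point, honoring exceptions first."""
        return EXCEPTIONS.get(proof_point, CATEGORY_WEIGHTS[category])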

3. Defining scoring methods

Developing the scoring method was another challenging aspect of the scorecard because we needed a simple approach, yet one that could still provide a useful level of detail about compliance. Stakeholder suggestions ranged from a "done/not done" approach to a complex, multidimensional scoring method that separately scored timing, sufficiency, completeness and correctness.

While the done/not done approach had strong support from management, it treated proof points that were late or incomplete the same as those not done at all, which could limit our insight into the actual level of compliance. The more complex approaches provided a finer level of detail, but they required too much time and effort to learn and use.

Seeking a balance, we settled on a four-level score, shown in Table 5. While simple, the scoring criteria offered more detail than just done/not done.

A blank entry in the score identifies a proof point that isn’t yet due in the life cycle or for which a deviation exists. These blank proof points are not included in the score calculation, so the project isn’t negatively impacted by being early in the life cycle and only having a few proof points scored.
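
As an illustration of how such a score might roll up, the sketch below (a Python approximation, not Intel's actual implementation) computes the overall compliance percentage as a weighted average of the scored proof points, skipping blank entries that are not yet due or are covered by a deviation. The score values and weights in the example are placeholders; the real four-level scale and weights are defined in Tables 4 and 5.

    # Illustrative score roll-up: weighted average of scored proof points, with
    # blank (None) entries excluded so early-phase projects are not penalized.
    # Score and weight values are placeholders, not those of Tables 4 and 5.
    from typing import List, Optional, Tuple

    def compliance_score(items: List[Tuple[Optional[float], int]]) -> Optional[float]:
        """items: (score, weight) pairs; score is None when not yet due or deviated."""
        scored = [(s, w) for s, w in items if s is not None]
        if not scored:
            return None                       # nothing due yet, so no score reported
        earned = sum(s * w for s, w in scored)
        possible = sum(w for _, w in scored)
        return 100.0 * earned / possible      # overall compliance percentage

    # Two complete items, one partial, one not yet due -> roughly 85.7%.
    print(compliance_score([(1.0, 3), (1.0, 2), (0.5, 2), (None, 3)]))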

The clause "completed more than one life cycle phase late" was added to the zero-score criteria (meaning an item was treated as not completed) after we discovered project teams were completing the required work for some proof points extremely late. Teams were attempting to improve their scores retroactively by going back and completing missed deliverables long after any value would have been added to the projects. In one instance, a team baselined its requirements after project closure (and prior to a quality assurance audit), hoping it could improve its score for that item.

IT management established a twice-monthly frequency for updating the scorecard. At those times, the PM must evaluate the status of any items completed on the project and then update the scorecard accordingly. Because a project of average duration generally completes about one to two proof points in a two-week period, this provided a reasonably current score without being overly burdensome.

4. Creating the scorecard

Each project needed visibility into the proof points and criteria that pertained to its life cycle, so we created a separate scorecard for each life cycle, placing each on its own worksheet in the same workbook.

To reduce data duplication and ensure consistency, we used an MS Excel formula to reference the proof point name, criteria and PLC finish phase from the single side-by-side list of criteria, which was stored in a hidden worksheet in the same workbook.

The formula distinguishes between common criteria (criteria shared by multiple life cycles) and unique criteria (criteria specific to a single life cycle), so criteria applicable to more than one scorecard need to exist in only one place. This prevents duplication of criteria and greatly simplifies maintenance and verification activities.
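
One plausible reading of that lookup behavior, sketched in Python (the production version was an MS Excel formula referencing a hidden worksheet; the entries, criteria text and phase names below are placeholders):

    # Sketch of the criteria lookup: each life-cycle scorecard pulls the proof
    # point's criteria and finish phase from the single master list, preferring
    # a life-cycle-specific (unique) entry and otherwise falling back to the
    # common entry, so shared criteria exist in only one place.
    MASTER = {
        ("Code peer review", "common"): ("Peer review held; defects recorded", "Construction"),
        ("Code peer review", "eXtreme Programming"): ("Pair review evidence captured", "Construction"),
    }

    def lookup(proof_point: str, life_cycle: str):
        """Return (criteria, finish phase), preferring life-cycle-specific entries."""
        return MASTER.get((proof_point, life_cycle), MASTER.get((proof_point, "common")))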

Columns were added for PMs and auditors to score each proof point, along with a discrepancy column that lets PMs document the reason for any 0 or 0.5 score or for any deviation that existed for a proof point.

Auditing

Prior to the introduction of the scorecard, audit notes were entered into an audit checklist, and a report was created in a separate document based on those notes. The PM was provided the report at the audit debrief meeting and stored it with the project collateral.

Using separate documents resulted in wasted time copying, pasting and reformatting information from the audit checklist into the audit report. Because the information was divided between two documents, it was also time-consuming to collect the data when producing the monthly audit summary reports.

With the audit score now accessible in the scorecard, it seemed only logical the audit report fields (Table 6) should be located there, too. Table 7 shows an example of how some of those fields now appear in the scorecard. This approach did have the drawback of making the scorecard physically larger; however, many agreed that having only one document saved much time and effort.

Deployment

The scorecard was deployed as part of a larger set of process improvements, so communication and support were addressed as a part of those changes. We have an existing deployment and support system (newsletter, process coaches, process training, website and various communities of practice) that provides details on the scorecard and its workings.

Because we knew management would be reviewing the scores, we suspected there would be anxiety around this topic. We arranged a series of question-and-answer call-in sessions where individuals could raise specific questions about the scorecards. Callers typically asked about unusual situations, and the majority indicated they understood how to interpret and use the scorecard.

Overall, the deployment went surprisingly smoothly, and we attributed this to the relative simplicity of scoring and providing scorecards customized to the terminology of each specific life cycle.

High scores for scorecard

From the standpoint of simply meeting requirements, the scorecard was deemed successful (see Table 8). The real measure of success has been its widespread use throughout the IT organization and the resulting changes in behaviors. For instance:

  • The score produced by the scorecard is now a required metric for every IT project that lasts more than eight weeks and is monitored by multiple levels of the organization’s management.  
  • Scores have steadily improved since the scorecard was implemented, and they are now above 90% in all nine IT divisions.
  • In an informal survey of PMs, 90% said they felt the scorecard made audits less stressful.
  • Audit effort was reduced by 15% from the previous quarter, largely due to eliminating the copying and reformatting of audit data.

In late 2007, the Excel version of the scorecard was automated and added to the IT PM dashboard online status tool. This allows PMs to score projects more easily, and overall scores are automatically rolled into IT metrics reports. The online version also ensures PMs make updates twice per month as required.

Auditing was decentralized in 2007, and ownership of compliance was moved to each of the divisions. Only two organizations have continued auditing to confirm the accuracy of their scorecards, and their scores remain high. The organizations that discontinued auditing have also kept their scores above 90%, but there is doubt about the accuracy of that data because there is no longer independent confirmation.

Although the matter is still under discussion, there are calls from some corners of the organization to resume some form of independent verification of the scorecards to address these accuracy concerns.

While we understood at the outset that our scorecard would not be an extremely sophisticated tool, we also knew a simple tool used correctly would be superior to something more complex used incorrectly or not at all. The widespread acceptance of this tool by all levels of the organization allowed us to move process compliance from being an audit-driven issue to one the entire organization owns.

© 2009 Intel Corp.


Terry Leip is a senior quality engineer at Intel Corp. in Chandler, AZ. He holds a bachelor’s degree in biology from Grand Canyon University in Phoenix. Leip is a Six Sigma Green Belt and a member of ASQ.
