Asset Data Integrity Is Serious Business




Robert S. DiStefano and Stephen J. Thomas

 



Overview

If your asset data is not reliable, enormous potential value is locked away, and you need to convince your organization to unlock it. To accomplish this, you need to understand the breadth of the problem and the value of solving it. A viable business case for action is needed: let's get started!

Asset Data Integrity Is Serious Business discusses physical asset data integrity, a critical aspect of every business. Physical assets are often the most valuable items on the balance sheet, yet the data that describes them is routinely overlooked. Collectively, that data creates information, enables accurate analysis, and supports sound business decisions.

Without accurate asset data there is a strong potential for poor decisions and their negative consequences. This book not only builds an appreciation of this fact, it also provides a road map for extracting value from something most CEOs, managers, and workers overlook.

Features of Asset Data Integrity Is Serious Business

  • Relies on the authors' decades of experience and hands-on expertise that cannot be obtained elsewhere.
  • Includes an assessment tool enabling the reader to easily recognize areas of improvement once a problem is detected.
  • Features valuable, practical "how to" information.
  • Focuses on the entire management spectrum, allowing everyone to see the value of data integrity within the context of their own responsibilities.

About the Authors

Robert S. DiStefano, CMRP, is Chairman and CEO of Management Resources Group, Inc. He is an accomplished executive manager with more than 30 years of professional engineering, maintenance, reliability, management, and consulting experience. Visit www.mrgsolutions.com.

Steve Thomas has 40 years of experience working in the petrochemical industry. He has published six books through Industrial Press, Inc., and Reliabilityweb.com, the most recent being Asset Data Integrity Is Serious Business and Measuring Maintenance Workforce Productivity Made Simple.


Introduction to the Business Case


A vast and growing amount of data is being accumulated in businesses today, yet many people in business intuitively know there is no corresponding improvement in reliable information on which to base good decisions. In fact, in many cases just the opposite is happening: despite more and more data, finding information that can be trusted is increasingly difficult.

Let’s be honest: to many people, no business subject is duller than “data.” Nevertheless, data integrity is written about in business journals more often than many seemingly more interesting topics. Furthermore, surveys reveal a growing concern among business executives about their ability to take advantage of the reams of data being collected.

Your intuition may tell you that there are large benefits associated with bringing integrity to your business data. We must admit, however, that intuition is not enough to garner the proper level of senior management support and resources to improve the data. You need a convincing business case whose development can prove to be very challenging for several reasons. First and foremost, the business case for data integrity is so vast, so far-reaching, so all-encompassing, and so pervasive in every aspect of business that knowing where to start, and how much of the story to tell, is a daunting proposition. We think the best approach is to frame the case in broad terms, citing specific facts and some quantitative examples that support the intuition that the business case for data integrity is huge. Armed with this information, you should then be able to personalize the case for data integrity in your firm or plant.



Information Overload


Consider this: the average installed data storage capacity at Fortune 1000 corporations has grown from 198 terabytes to 680 terabytes in less than two years, more than a threefold increase, and capacity continues to double roughly every ten months! That statistic puts into objective terms what we all instinctively know about our data: we have huge quantities of it, and we are accumulating more every day.
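
To check the arithmetic, here is a minimal Python sketch; the 20-month window is our assumption, since the text says only "less than two years":

    import math

    # Reported Fortune 1000 average installed storage capacity.
    start_tb, end_tb = 198, 680
    months = 20  # assumed elapsed time; the text says only "less than two years"

    growth_factor = end_tb / start_tb
    doubling_time = months * math.log(2) / math.log(growth_factor)

    print(f"growth factor: {growth_factor:.2f}x")                # 3.43x
    print(f"implied doubling time: {doubling_time:.1f} months")  # ~11 months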



Searching for Data

What else do we know about our data? In an InformationWeek article (January 2007), writer Marianne Kolbasuk McGee reported that the average middle manager spends about two hours a day looking for needed data. The study does not comment on how often the search ends successfully, but we can assume that at least some of that time is wasted. Why? There are several reasons.

First, the volume of data is too large and most of it is not needed. To arrive at the needed data, one has to cull through reams of irrelevant or unnecessary data.

Second, the quality of the data, or data integrity, is generally poor. Much of the data is inaccurate, out of date, inconsistent, incomplete, poorly formatted, or subject to interpretation. Therefore, even when you do arrive at the needed data, can you trust it? If you hesitate to answer that question, you’re undoubtedly spending time deciding whether the data you have finally found (assuming you actually found it) is trustworthy and whether you can rely on it to accomplish the task at hand.

There are other reasons, but these two alone are compelling. Let’s try to quantify these phenomena. The U.S. Department of Labor’s Bureau of Labor Statistics indicates that in May 2006 approximately 142 million workers were in the U.S. workforce. Assume conservatively that only 10% of those workers are middle managers, and that only 25% of the two hours per day spent searching for data, that is, half an hour, is wasted (many studies indicate the actual percentage is higher). At 230 working days per year, 1,633,000,000 hours (that’s right ... billion!) are wasted annually in the United States alone, equivalent to about 785,000 man-years at 2,080 hours per man-year!

To put these figures in financial terms, suppose $40 per hour is the average loaded cost rate for middle managers. Then $65,320,000,000 is wasted every year: that’s $65.32 billion annually, just in the United States! Imagine what this number is when calculated worldwide!

Can we put those 1,633,000,000 freed-up hours per year (or 785,000 man-years) to good use? Most certainly!
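
The arithmetic behind these totals is easy to verify. In the Python sketch below, the 230 working days per year and 2,080 hours per man-year are assumptions implied by the totals rather than figures stated in the text:

    workforce = 142_000_000            # U.S. workforce, May 2006 (BLS)
    managers = workforce * 0.10        # conservative: 10% are middle managers
    wasted_hours_per_day = 2 * 0.25    # 25% of the two search hours is wasted
    working_days = 230                 # assumed working days per year

    wasted_hours = managers * wasted_hours_per_day * working_days
    man_years = wasted_hours / 2_080   # assumed 2,080 hours per man-year
    dollars = wasted_hours * 40        # $40/hour loaded cost

    print(f"{wasted_hours:,.0f} wasted hours per year")  # 1,633,000,000
    print(f"{man_years:,.0f} man-years")                 # ~785,000
    print(f"${dollars / 1e9:.2f} billion wasted")        # $65.32 billion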



Retiring Baby Boomers

Assuming we can fix the data integrity problem nationwide and free up these hours, some of the retiring workers won’t have to be replaced, and the cost structure of our companies will go down. According to the U.S. Department of Labor’s Bureau of Labor Statistics, approximately 22.8 million people aged 55 and older are in the U.S. workforce today, approximately 16% of the entire workforce.

Assume conservatively that the number of workers in this category does not increase, and that these 22.8 million workers will retire evenly over the next ten years. In short, approximately 2.3 million workers will retire each year in the United States (the actual estimated number is higher). The 785,000 freed-up man-years related to data integrity could account for about a third of that, so roughly one-third of those retiring workers would not have to be replaced, assuming we solve the data integrity problem.
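
Continuing the same sketch, with the figures above, shows where the "about a third" comes from:

    older_workers = 22_800_000               # workers aged 55+ (BLS)
    retirees_per_year = older_workers / 10   # assume even retirement over ten years

    freed_man_years = 785_000                # freed by fixing data integrity (above)
    print(f"{freed_man_years / retirees_per_year:.0%}")  # 34%: about one-third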

Again, our assumptions in this example are conservative. It is entirely possible that simply fixing the data integrity problems could go a long way toward solving the aging / retiring workforce debacle in the United States and elsewhere.

This analysis deals strictly with an efficiency gain. We have not yet talked about the effectiveness of our efforts or, to put it another way, the impact of the “Brain Drain” on the knowledge residing inside the corporation.



The Brain Drain

More than 80% of U.S. manufacturers face a shortage of qualified craft workers. This shortage stems from the retiring workforce phenomenon and from the fact that fewer new workers are entering the skilled trades, or even technical degree programs. As a result, we don’t have a feedstock of replacement workers ample enough, or skilled enough, to replace the retirees.

This challenge should put the onus on the management of our industrial companies to figure out how to leverage a potentially smaller workforce by eliminating wasted activities. Perhaps more importantly, it should also challenge them to institutionalize and memorialize, in the company’s systems and data sources, the knowledge currently in the heads of its workers. Wouldn’t meeting this challenge head-on facilitate and accelerate the accumulation of skills and knowledge on the part of new, less-skilled workers?

In addition, the institutionalization of knowledge and information could allow the same work to be accomplished satisfactorily by less-skilled workers. In other words, it is possible that we won’t have to replace the retiring workers in kind. The combination of less-skilled workers with better systems, automation, information, procedures, guidelines, training media, and so on could represent a game-changing step in how we go about doing the work in our manufacturing and industrial companies! That step change could permanently and favorably impact the cost of doing business.



A Business Case Example

Many studies in the maintenance and reliability (or physical asset management) field, including several conducted by Management Resources Group, Inc., have pointed consistently to an estimate that between 30 and 45 minutes per day per maintenance worker is wasted searching for spare parts because of poor catalog data integrity. Spare parts represent just one narrow area of the many aspects of physical asset management, but they provide a helpful example. (Incidentally, according to research presented in Maintenance Planning and Scheduling Handbook, by Richard (Doc) Palmer, the total unproductive time of an industrial maintenance worker is, on average, 5 hours and 45 minutes per day. That means productive time averages only 28% of an eight-hour day! Not all of that unproductive time is related to data integrity, but some of it certainly is.)

If you are not familiar with this aspect of physical asset management, be aware that inventory catalog descriptions are generally not formatted consistently or in a way that facilitates rapid searching for the needed spare part. Searchers often become frustrated because they cannot easily find the part in question; sometimes the search doesn’t result in a successful find at all, let alone a rapid one. Typical problems include: the system indicates that the needed part is in stock, but the bin turns out to be empty; the indicated bin location is wrong; or the searcher spells a search word differently from the myriad ways it appears in the catalog material master records (e.g., Bearing, BRG, Brg).
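
The spelling-variant problem is easy to illustrate. In this minimal sketch, the abbreviation table and catalog records are hypothetical, not taken from any real catalog system; it simply shows why normalized nomenclature matters for search:

    # Hypothetical abbreviation table; a real catalog needs a far larger one.
    SYNONYMS = {"brg": "bearing", "vlv": "valve", "mtr": "motor"}

    def normalize(text: str) -> str:
        """Lower-case each word and expand known abbreviations."""
        words = text.lower().replace(",", " ").split()
        return " ".join(SYNONYMS.get(w, w) for w in words)

    catalog = ["BRG, ball, 6204", "Bearing roller 22210", "Vlv gate 2in"]

    query = normalize("Bearing")
    hits = [item for item in catalog if query in normalize(item)]
    print(hits)  # finds both bearing records, however they were spelled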

Referring to the U.S. Department of Labor’s Bureau of Labor Statistics’ May 2006 Occupational Employment and Wage Estimates, it is estimated that the United States has approximately 5.45 million industrial maintenance workers today. The same data indicates that the mean hourly wage rate for these workers is approximately $20. A loaded cost including fringe benefits would be approximately $26 per hour.

If each of these workers wastes a conservative 30 minutes per day searching for spare parts, then at 230 working days per year we are wasting 626,750,000 hours annually in the United States. That’s the equivalent of over 300,000 workers, or about 5% of the industrial maintenance workforce. At the mean loaded cost per hour, it equates to $16,295,500,000 annually: more than $16 billion!
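
The same kind of back-of-the-envelope check applies here; once again, 230 working days per year is an assumption implied by the totals:

    workers = 5_450_000      # U.S. industrial maintenance workers (BLS, May 2006)
    wasted_per_day = 0.5     # conservative: 30 minutes searching for parts
    working_days = 230       # assumed working days per year
    loaded_rate = 26         # dollars per hour, including fringe benefits

    wasted_hours = workers * wasted_per_day * working_days
    print(f"{wasted_hours:,.0f} hours")                        # 626,750,000
    print(f"{wasted_hours / 2_080:,.0f} worker-equivalents")   # ~301,000
    print(f"${wasted_hours * loaded_rate / 1e9:.2f} billion")  # $16.30 billion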

Are we suggesting that the primary manifestation of these potential gains is a reduction in head count? Not necessarily, although the natural attrition generated by the Baby Boomers’ retirements will present opportunities to reduce head count without having to lay off any workers.

In addition, you gain the real opportunity to redeploy the freed-up resources to more value-added activities that will drive higher equipment reliability and lower maintenance costs. The consensus of the expert community in asset management is that most industrial plants rely too heavily on time-based preventive maintenance (PM) procedures as a primary maintenance strategy. Based on the results of thousands of PM optimization initiatives, approximately 60% of existing preventive maintenance activities are inappropriate strategies for the assets in question. Thus, a very large portion of the maintenance workforce is engaged in low-value or zero-value work. Expert analysis of equipment failure behavior, using proven tools like Reliability Centered Maintenance (RCM) and Failure Modes and Effects Analysis (FMEA), shows that the vast majority of assets in a typical industrial complex, about 89%, do not observe a predictable time-based failure pattern. Only about 11% of assets do so, as Figure 1 clearly shows.

[Figure 1: equipment failure curves, plotting probability of failure against time]


The failure curves depicted in Figure 1 are accepted and proven knowledge dating back to studies that began in the 1960s. Keep in mind that the curves in this figure show the probability of failure on the basis of time (the x-axis is time in these curves). What these curves tell us is that it is impossible to predict failures of 89% of the assets in a plant on the basis of time. That does not mean we cannot predict failure for these classes of assets-it simply means that we cannot do so on the basis of time.

If the failure behavior of a specific class of asset shows that the asset fails randomly with respect to time, how can we accurately define an interval for preventive, or time-based, maintenance? We can’t! Yet that is exactly what we have tried to do for the past fifty years: typically, we have guessed at a correct and safe preventive maintenance interval from the asset’s historical failure behavior.

Consider an asset that over a 5-year period ran for 1 year before its first failure, then after repair it ran for 6 months before its next failure, then 3 months, then 18 months, then 5 months, then 16 months. What time interval would we set to do preventive maintenance on this asset if we wanted to prevent failure? If the asset is critical to operations, we’d have to take a risk-conservative approach and say that we should do something to this asset every 3 months. Based on the actual 5-year failure history of this asset, that would mean that we would have done preventive maintenance much too often.
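
A short sketch makes the cost of that risk-conservative rule concrete. The run lengths are the ones from the example above, and counting one PM per interval within each run is a rough approximation:

    runs = [12, 6, 3, 18, 5, 16]   # months of operation between failures

    pm_interval = min(runs)        # risk-conservative choice: 3 months
    pm_count = sum(run // pm_interval for run in runs)
    print(f"PM every {pm_interval} months -> roughly {pm_count} PMs in 5 years")
    # Roughly 19 PM interventions against only 6 actual failures.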

Not only did the machine not need a PM during many of those runs, but as we can see from Figure 1, we may have introduced defects that actually induced failures that otherwise would not have occurred. This phenomenon is referred to in the reliability profession as Infant Mortality. Many people have probably heard the phrase “if it ain’t broke, don’t fix it.” Well, this adage has more merit than you would think.


As you can see in Figure 2, there is significant basis and proof, dating back to the 1960s, to support elimination of many existing PMs. This elimination would free up significant manpower, potentially to be used to hedge against the loss of knowledge with retiring Baby Boomers, or to redeploy to perform other more value-added tasks that would be required to enhance asset performance.

Most equipment does not observe a time-based failure pattern. Should we therefore do no maintenance at all on the 89% of assets and simply wait for them to fail? Absolutely not. While we cannot predict failure for these assets on the basis of time, we most certainly can predict it on the basis of condition, using a variety of sensitive technologies and tools designed to detect early warnings of impending failures. These technologies and tools are commonly referred to as Predictive Maintenance and Condition Monitoring. Examples include vibration analysis, infrared thermography, oil analysis, and ultrasonic inspection. There are others that we don’t need to go into here.

The trick to preventive maintenance (PM) optimization (i.e., reduction) and to proper deployment of predictive maintenance tools is first to know how to categorize the assets, using analysis methods designed to understand likely and costly failure modes. Then, with that knowledge, review the existing preventive maintenance procedures and eliminate those that either do not address failure modes or are applied to asset types that don’t observe any time-based pattern. Once these steps are undertaken, the appropriate PM strategies must be deployed. This optimization of the maintenance program invariably yields a significant reduction in work, with an attendant reduction in labor and spare parts usage. In turn, these results drive significant cost savings and enhanced asset performance.
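
As a loose illustration of that review step (not the authors' method; the record layout and the rules are hypothetical), one might flag candidate PMs for elimination like this:

    from dataclasses import dataclass

    @dataclass
    class PmTask:
        asset_class: str
        addresses_failure_mode: bool  # does the task target a known failure mode?

    # Hypothetical: classes whose failures proved time-based (the ~11%).
    TIME_BASED_CLASSES = {"belt drive", "wear-out seal"}

    def keep(task: PmTask) -> bool:
        """Keep a PM only if it targets a failure mode on a time-based class."""
        return task.addresses_failure_mode and task.asset_class in TIME_BASED_CLASSES

    tasks = [
        PmTask("centrifugal pump", True),   # random failure pattern: eliminate
        PmTask("belt drive", True),         # time-based wear: keep
        PmTask("belt drive", False),        # addresses no failure mode: eliminate
    ]
    print([keep(t) for t in tasks])         # [False, True, False]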

You may be asking yourself at this stage, “What does this all have to do with data integrity?” Well, how can you possibly accomplish this optimization if your foundational data sources lack integrity and quality (i.e., are incomplete, inconsistent, and inaccurate)? If you don’t have an accurate and complete equipment list, for example, you lack a fundamental prerequisite to unlocking these technical benefits. The answer is that without asset data integrity you cannot accomplish the optimization described here, particularly if you want to do so both efficiently and effectively.

Consistency or Lack Thereof

Most corporations have allowed the different industrial plants in their asset fleets significant autonomy in the choice and use of systems, the formatting of master foundational data in those systems, maintenance strategies, and so on. It is typical today for multiple plants in one corporation to have similar, if not identical, assets, yet these assets are described differently from plant to plant. The maintenance strategies deployed for these assets also vary dramatically from plant to plant. Wide variation in maintenance strategy across a fleet of like assets results in a corresponding variation in operating performance: some assets operate reliably, whereas other assets of a similar or identical class do not.

Based on our knowledge of best practices, why would we allow this in any company? Wouldn’t we want to use sound analytical methods to classify our assets, analyze their failure modes, and apply somewhat consistent maintenance strategies across the enterprise (taking into consideration that some differences are warranted given operating context, etc.)? It seems logical and makes common sense to want to do so. But how can we undertake these steps efficiently if our assets are not described with a consistent taxonomy across the enterprise? Once again, we can’t.

For those who may not be familiar with the term “taxonomy,” it refers to the system of classification that guides the consistent formatting and nomenclature assignment used to describe whatever is being classified. A consistent taxonomy allows you to identify the like assets across the fleet and then measure and solve for the variation. An inconsistent taxonomy seriously impairs your ability to optimize your asset maintenance strategies and achieve consistent, reliable operation of your assets across the fleet. At the most basic level, this is a data integrity issue that must be solved in order to tap into the potential cost savings and improved asset performance that are waiting to be unlocked. Without data integrity, a significant entitlement of business benefits is locked away and unattainable.
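
A consistent taxonomy is easiest to appreciate in miniature. In this hypothetical sketch, every plant describes assets with the same class/subclass/model fields, so like assets surface with a simple key match; the records are invented for illustration:

    from collections import defaultdict

    # Hypothetical fleet records sharing one taxonomy.
    fleet = [
        {"plant": "A", "class": "PUMP", "subclass": "CENTRIFUGAL", "model": "CP-100"},
        {"plant": "B", "class": "PUMP", "subclass": "CENTRIFUGAL", "model": "CP-100"},
        {"plant": "C", "class": "PUMP", "subclass": "GEAR",        "model": "GP-7"},
    ]

    groups = defaultdict(list)
    for rec in fleet:
        groups[(rec["class"], rec["subclass"], rec["model"])].append(rec["plant"])

    print(dict(groups))
    # Like assets at plants A and B surface immediately; with inconsistent
    # descriptions ("Pump", "PMP", "centrif."), this simple match would fail.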


Table of Contents


PART 1: UNDERSTANDING THE IMPORTANCE OF ASSET DATA INTEGRITY

  • The Business Case for Data Integrity
    • Introduction to the Business Case
    • Information Overload
    • Searching for Data
    • Retiring Baby Boomers
    • The Brain Drain
    • A Business Case Example
    • Consistency or Lack Thereof
    • The Data Integrity Corporate Entitlement
    • Impact on Shareholder Value
  • Plant Asset Information - A Keystone for Success
    • Overview
    • Who Are The Stakeholders?
    • Why We Wrote This Book
    • Who Will Benefit?
    • What You Will Learn
    • Chapter Synopsis
    • Let's Get Started
  • What Is Data Integrity?
    • Defining the Terms
    • Data Elements
    • Taxonomy and Why It Is Important
    • What We Are Looking for in Good Data
    • The Downside of Poor Data Integrity
    • A Word About Information Technology
    • Understanding Data Is Just the Beginning
  • The Asset / Data Integrity Life Cycle
    • About Life Cycles
    • The Asset Life Cycle
    • The Asset Data Life Cycle
    • Why the Data Life Cycle Is Important
    • Roles and Responsibilities Within the Asset Life Cycle
    • It Is Never Too Soon To Start
    • Life Cycle Links
    • Life Cycles as a Foundation
  • Data Integrity at the Task Level
    • Task vs. Strategic
    • The Data Integrity Transform
    • Data Integrity Tasks
    • Reactive Data Integrity
    • Proactive Data Integrity
    • From Reactive to Proactive
  • Internal Outcomes and Impacts
    • Indirect Impacts
    • Decisions Are Just the Beginning
    • Indirect Inputs
    • Indirect Outputs
    • The Legal Umbrella
    • Indirect Aspects of the Transform
  • External Outcomes and Impacts
    • External Issues
    • Outcomes and Impacts - Partners
    • Outcomes and Impacts - Suppliers
    • Outcomes and Impacts - Customers
    • Outcomes and Impacts - Agencies
    • Outcomes and Impacts - Public
    • Outcomes and Impacts - Insurance Carriers
    • The External Impacts Are Important
  • Information Technology (IT) Problems and Solutions
    • The Implication for IT
    • Implications to IT of a Modern Asset Data Management Practice
    • The Advent of ERP Systems
    • Master Data Management
    • The Future

PART 2: BUILDING A SOUND DATA INTEGRITY PROCESS

  • Building an Enterprise-Level Data Integrity Model
    • Historical View
    • What Is an Asset?
    • Asset Classification
    • Static Data vs. Dynamic Data
    • The Differences Among Assets, Functional Locations, and Functional Location Hierarchies
    • Other Asset-Related Master Data
    • Asset Master Data Structure and Formatting
    • Ideal Asset Data Repositories
    • Enterprise-Level vs. Plant-Level Asset Data Integrity
  • Building an Enterprise-Level Inventory Catalog Data Integrity Model
    • The Model For Material
    • What Is a Spare Part?
    • Items Classification
    • Static Data vs. Dynamic Data
    • Ideal Item Data Repositories
    • Enterprise-Level vs. Plant-Level Item Data Integrity
  • Data Integrity Assessment
    • Data Quality Dimensions - The Beginning
    • The Approach to the Assessment
    • The Initial Steps
    • The Assessment - General Comments
    • The Assessment Process
    • Moving Forward
  • Assessment Details - Assets and Material Items
    • Similar But Different
    • Assessing Asset Data
    • Assessing Material Data
    • Data Strategy Session
    • To-Be Taxonomy
    • Primary Data Fields
    • Class and Subclass
    • Manufacturer or Supplier Name
    • Assets - Model Number or Serial Number
    • Material Items - Manufacturer or Supplier Part Number
    • Attribute Templates
    • Other Asset Data Fields
    • The Goal - Quality Data for the Future
  • Asset Data Clean-Up and Repair
    • After the Assessment
    • Data Repair Is Far from Simple
    • Repair Problems
    • Data Repair Strategies
    • The Big Bang Approach
    • Fix It As You Go
    • The Line in the Sand - More on Sustainability
    • Commitment to Doing the Work

PART 3: SUSTAINING WHAT YOU HAVE CREATED

  • Data Governance
    • Data Governance - Insight to the Problem
    • Shifting the Burden
    • The Long Term Solution
    • The Benefits of Data Governance
    • The Jobs of Data Governance
    • It's All About Policy and Controls
    • Roles and Responsibilities
    • When Should We Start?
  • Sustaining What Has Been Created
    • The Need to Sustain
    • Establishing Ownership
    • Communication
    • Process and Procedures
    • Training
    • Prepare for Data Growth
    • Walking the Walk
    • Quality Control and Quality Assurance
    • Using Key Performance Indicators
    • The Continuous Improvement Cycle
    • Sustainability Is Not Optional
  • Data Integrity Is Serious Business
    • Getting Started

  • Bibliography
  • Index