Spurred by a regulatory environment focused on corporate-data accountability and transparency, the traditionally sleepy subject of storage management has burst to the forefront of CIO concerns in recent months. Information life-cycle management (ILM) has been thoroughly hyped as a better way for CIOs to align storage costs and deal with regulatory compliance.
ILM does indeed hold promise. Because it's content-aware, it allows organizations to automate their document and data-retention policies. And although compliance efforts are typically seen as a cost drain, ILM implementations can often provide a net savings by optimizing the use of expensive storage devices and ensuring that data is stored on the least-expensive media appropriate for its usage patterns.

Within the past 18 months, it seems that just about every enterprise vendor has unveiled ILM products. The result is a sometimes bewildering array of disparate functionality painted with the ILM label, effectively blurring the concept and confusing many CIOs. This is clearly counterproductive, because IT organizations will benefit by complying with regulations such as Sarbanes-Oxley, improving data accessibility, and lowering storage costs.

The best definition of ILM is provided by the Storage Networking Industry Association (SNIA). It says that information life-cycle management comprises the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost-effective IT infrastructure, from the time information is conceived through its final disposition. Information is aligned with business processes through management policies and service levels associated with applications, metadata, information, and data.

While a bit long, the definition makes a few key points. First is the fact that the tools component (the technology) comes last in the list of what makes up an enterprise ILM solution. Data storage is still perceived to be primarily an IT concern, and technology alone won't provide a comprehensive storage-management program.

Currently, IT organizations use technology approaches for three types of data: enterprise, E-mail, and office documents. IT organizations tend to act as the data owners for enterprise data from ERP, CRM, and other large mission-critical applications.
The result has generally been positive: this class of data tends to be of high quality, and it's well protected from loss and unauthorized access. However, typical backup and retention mechanisms are an all-or-nothing affair, and the implications often aren't well understood by the business entities that bear primary regulatory responsibility for the data. In particular, it can be difficult and expensive to locate the details of specific relationships and transactions over a period of time.

The size of E-mail in-boxes and folders is almost universally managed at the server level. Users are frequently encouraged to use offline archives to retain messages and are rarely constrained from doing so. Financial-services firms, in particular, have implemented centralized capture of E-mail, but they only infrequently filter or otherwise act on the content of the messages. Many companies publish message-retention policies, but these are irregularly enforced (particularly as they relate to disposition), potentially resulting in high discovery costs and occasional surprises.

Businesses store office documents and miscellaneous files on users' local drives (typically without backup) or on departmental file servers. While file-server content is backed up, it's often left unmanaged apart from size. Available storage is usually limited only by the willingness of the department to bear the cost of disks and backup facilities.

What's common among these approaches is that none of them demonstrates awareness of a document's content. This is unfortunate, because corporate management, auditors, regulators, and courts typically do. It's increasingly common for IT organizations to receive discovery requests for documentation related to a particular transaction or project. The differential treatment of data based on location makes responding to these requests difficult and time-consuming. Worse, it provides inconsistent availability of similar data.

Source and full article: www.optimizemag.com