The rise of big data, business analytics and stricter compliance regulations necessitates that enterprises retain far more data than ever before. Against this backdrop, disk- and cloud-based backup implementations are scaling quickly. In fact, many enterprises now manage secondary storage repositories that are ten times larger than their production storage.
Especially for large enterprises with diverse data stores, tiered backup strategies are critical to meeting recovery service level agreements (SLAs) without breaking the bank; not all data is mission-critical to recover immediately in the event of an incident, so much of it can live on less expensive storage media.
As discussed in the video below, tape storage media is often perceived as a viable fit only for archive and long-term retention use cases today, but it can also play an important role in the backup hierarchy.
While backup remains an active use case for tape due to its value for fast site restores and protection against cybercrime, tape’s future growth opportunities lie in many new and emerging areas. With the Internet, cloud, big data, compliance and IoT waves promising unprecedented data growth, the timing for advanced tape functionality couldn’t be better.
Most storage environments consist of multiple platforms, vendors and clouds, creating a unique set of problems: a lack of data visibility to enable tiering that aligns data value with storage costs, data locked in silos each with its own management tools, and complicated data protection.
In this video, Floyd Christofferson, CEO of StrongBox Data Solutions, explains how the StrongLink Autonomous Engine™ with Data Insights and analytics quickly, dynamically and seamlessly automates data movement and storage tiering across platforms, vendors and clouds. Data is managed in real time to support workflows and collaboration, SLAs and quality of service, while eliminating resource contention issues.
The selection of data storage technologies has never been more robust. Today’s choices range from ultra-high-capacity, low-cost storage at one end of the hierarchy to very high levels of performance and functionality at the other. These choices define the distinct levels, or tiers, of today’s storage hierarchy.
The foundations of tiered storage were laid over 30 years ago, when disk, automated tape libraries and advanced policy-based hierarchical storage management (HSM) software combined to migrate less-active data to less expensive storage devices. At the highest level, tiered storage refers to an infrastructure capable of optimally aligning storage systems with application requirements and their required service levels. The business case for implementing tiered storage is compelling and becomes increasingly so as the storage pools get larger. Tiered storage integrates hardware and storage management software into a seamless operation that lets customers realize the significant TCO and ROI benefits available today.
A tiered storage environment consists of two or more kinds of storage technologies, delineated by differences in four primary attributes: price, performance, capacity and functionality.
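To make the four attributes concrete, here is a minimal sketch of how a tiering policy might map data activity and recovery needs onto a hierarchy. The tier names, costs and thresholds below are illustrative assumptions, not figures from any vendor or from this article:

```python
# Hypothetical tier-selection sketch; all numbers are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_gb_month: float  # price
    read_latency_ms: float    # performance
    max_capacity_tb: int      # capacity

# Illustrative hierarchy, fastest/most expensive first
TIERS = [
    Tier("flash", 0.20,     0.1,    100),
    Tier("disk",  0.05,    10.0,  1_000),
    Tier("tape",  0.004, 60_000, 10_000),
]

def pick_tier(days_since_access: int, needs_instant_restore: bool) -> Tier:
    """Map data activity and recovery requirements onto a storage tier."""
    if needs_instant_restore or days_since_access < 30:
        # Active or recovery-critical data stays on faster media
        return TIERS[0] if days_since_access < 7 else TIERS[1]
    return TIERS[2]  # cold data goes to the cheapest tier

print(pick_tier(90, False).name)  # cold, no instant restore -> tape
```

The point of the sketch is simply that each placement decision trades off the same four attributes the paragraph above names: price, performance, capacity and functionality.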
Get the Most Out of Your Isilon with Autonomous Data Management
When an Isilon cluster fills up, IT is faced with deciding whether to buy more Isilon, move data to another platform, or archive/tier data off the Isilon. The challenge is figuring out how to maximize the value of the existing storage investments and reduce storage costs.
What if you could connect Isilon with other storage types into a cross-platform global namespace while reducing costs and improving business continuity and disaster recovery?
StrongLink automatically migrates data from Isilon to other storage platforms—including tape and public cloud—without interrupting user access and while maximizing the value of your existing storage investments.
In this Storage Switzerland video, Tab Butler, Senior Director of MLB’s Media Management and Post Production talks to storage analyst George Crump about how the MLB Network manages all of its data and why the organization views tape as a vital component of the process:
Have you ever watched a TV show featuring some dangerous activity, where the warning comes up: “Do not attempt this yourself”? It’s usually good advice, and it reminds me of an experience I had recently.
I had the pleasure of co-presenting at “The Reel Thing,” a part of the Association of Moving Image Archivists Conference in Portland, Oregon, with Steve Kochak from Digital Preservations Laboratories. During my part of our presentation, I explained to the audience how an LTO cartridge is made, detailing each of the different components in a cartridge and their functions.
The magnetic tape data storage industry has withstood numerous challenges: from its own past performance, from the HDD industry, and mainly from those who are simply uninformed about the major transformation the tape industry has delivered. Early experiences with non-mainframe tape technologies were troublesome and turned many data centers away from tape in favor of HDDs; mainframe tape technology was more robust. As a result, many data centers still perceive tape as mired in its legacy past. However, this view is completely out of date.
In this new white paper, Fred Moore, president of Horison Information Strategies, explains why it’s time to take advantage of the many benefits tape can bring to your storage infrastructure.
In a recent Storage Switzerland blog, Lead Analyst George Crump talks about how, because IT is perpetually working to lower both capital and operating expenses associated with backup storage infrastructure, backup workloads are common targets for migration to the cloud. However, this is not necessarily the most effective strategy for optimizing cost efficiencies.
In this video, he talks with IT consultant Brad Johns about why IT organizations should holistically evaluate the total cost of ownership (TCO) of their backup storage infrastructure, as opposed to focusing solely on immediate costs such as upfront infrastructure acquisition.
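The holistic-TCO argument can be reduced to simple arithmetic: upfront acquisition plus operating costs accumulated over the planning horizon. Here is a back-of-the-envelope sketch; every dollar figure below is a placeholder assumption for illustration, not data from the video or from any analyst:

```python
# Back-of-the-envelope TCO comparison; all figures are hypothetical.
def tco(acquisition: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over a planning horizon."""
    return acquisition + annual_opex * years

# Hypothetical backup target over a 5-year horizon:
cloud = tco(acquisition=0, annual_opex=250_000, years=5)       # per-GB fees, egress, etc.
tape  = tco(acquisition=300_000, annual_opex=40_000, years=5)  # library + media, power, admin

print(f"cloud: ${cloud:,.0f}  tape: ${tape:,.0f}")  # cloud: $1,250,000  tape: $500,000
```

The sketch illustrates the blog’s point: an option with zero upfront cost can still carry the higher total cost once recurring expenses are counted over the full horizon.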
I just spent a full day at a meeting of the Active Archive Alliance and as I was flying home it occurred to me that it’s time for data storage managers to rise up from the sleepy status quo of buying more disk arrays to address runaway data growth problems. It’s time to wake up and smell the sweet aroma of freshly made modern data tape (sort of like that new car smell if you don’t know).
Why do that, you ask? Because best practices and undeniable facts say so. Consider the following:
Data goes through a lifecycle from hot to cold, that is, from a period of active use to a period of inactivity. This can happen in as little as 30 days.
Inactive data should not stay on primary storage devices. It takes up space on expensive storage media, consumes more energy and adds to the backup burden.
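The two points above describe exactly what HSM-style policy software automates. A minimal sketch of such a policy, assuming a simple last-access-time threshold and flat directories (the paths and the 30-day cutoff are illustrative, not a product's actual policy engine):

```python
# Minimal HSM-style sketch: move files not accessed within a threshold
# off primary storage. Threshold and directory layout are assumptions.
import os
import shutil
import time

DAYS_INACTIVE = 30  # assumed policy threshold, echoing the lifecycle above

def migrate_cold_files(primary: str, archive: str, days: int = DAYS_INACTIVE) -> list:
    """Move files whose last access time is older than `days` into `archive`."""
    cutoff = time.time() - days * 86_400
    moved = []
    os.makedirs(archive, exist_ok=True)
    for name in os.listdir(primary):
        path = os.path.join(primary, name)
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            shutil.move(path, os.path.join(archive, name))
            moved.append(name)
    return moved
```

A real implementation would also leave a stub or namespace entry behind so users can recall the file transparently, which is what distinguishes tiering software from a plain move script.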