In this video, Brendan Sullivan, Fred Moore, and Chris Dale discuss the IT environments of the past 30 years that have produced mountains of unstructured data backed up or archived across a plethora of tape and data formats, and explain why the vaulting strategies devised at the time fail to address today's legacy data challenges.
Archival data is piling up faster than ever as organizations learn the value of analyzing vast amounts of previously untapped digital data. Industry studies consistently find that the vast majority of digital data is rarely, if ever, accessed again after it is stored. This is changing, however, with the emergence of big data analytics driven by Machine Learning (ML) and Artificial Intelligence (AI) tools that bring data back to life and tap its enormous value for improved efficiency and competitive advantage.
The need to securely store, search, retrieve, and analyze massive volumes of archival content is fueling new and more effective advances in archive solutions. These trends are further compounded as a growing number of businesses approach hyperscale levels with significant archival capacity requirements.
For over five decades, CERN has used tape for its archival storage. In this Fujifilm Summit video, Vladimir Bahyl of CERN explains how they increased the capacity of their tape archive by reformatting certain types of tape cartridges at a higher density.
By Ken Kajikawa,
OEM Technical Support Manager
FUJIFILM Recording Media U.S.A., Inc.
Did you know 96,000 petabytes (PB) of total compressed tape capacity shipped in 2016? To put that into perspective, that's over 326,000 years of 24/7 Full HD video! But why do so many companies depend on tape if primary backup to disk can be faster, or the cloud cheaper in the short term?
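As a quick sanity check on the comparison above, the snippet below works backwards from the two published figures (96,000 PB and 326,000 years) to the video bitrate they imply. The decimal petabyte (10^15 bytes) and the 365.25-day year are assumptions, not stated in the article.

```python
# Sanity-check "96,000 PB = over 326,000 years of 24/7 Full HD video".
# Assumptions: 1 PB = 10**15 bytes (decimal), 1 year = 365.25 days.
total_bytes = 96_000 * 10**15           # compressed tape capacity shipped in 2016
claimed_years = 326_000                 # stated playback duration
seconds = claimed_years * 365.25 * 86_400

# Bitrate the comparison implies, in megabits per second.
implied_rate_mbps = total_bytes * 8 / seconds / 10**6
print(f"Implied video bitrate: {implied_rate_mbps:.1f} Mbit/s")
```

The implied rate works out to roughly 75 Mbit/s, i.e. the comparison assumes high-bitrate Full HD footage rather than typical streaming-quality video (around 5 to 8 Mbit/s).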