ExaGrid’s New DeltaZone Deduplication Architecture Combines Generic Flexibility with GRID Scalability for Disk-Based Backup Environments with Deduplication
London — October 20, 2010 — ExaGrid Systems, Inc., the leader in cost-effective and scalable disk-based backup solutions with data deduplication, today introduced DeltaZone™ Deduplication, the first deduplication algorithm to support both generic and content-aware deduplication while maintaining performance scalability. For the first time, customers can combine generic byte-level or content-aware byte-level deduplication with GRID scalability. ExaGrid’s enhanced GRID architecture with DeltaZone Deduplication is available to customers without requiring any changes to their existing environments.
Historically, customers have had to choose between byte-level deduplication with content awareness and scalability, and the generic approach of block-level deduplication with limited, disk-only scalability. Now, with ExaGrid’s DeltaZone, customers can have generic deduplication, content-aware deduplication, and GRID scalability in a single product from a single vendor.
“Customers looking for faster, more reliable backups and restores, and scalability without forklift upgrades as data grows, have typically run into a tradeoff when deciding to purchase a disk backup with deduplication solution. It has often boiled down to a choice between the unbeatable scalability of a byte-level approach and the generic flexibility of a block-level approach,” said Marc Crespi, vice president of product marketing for ExaGrid. “With the introduction of DeltaZone, we’ve eliminated this tradeoff. Customers using DeltaZone can expect best-in-class flexibility and scalability in one plug-and-play disk backup with deduplication appliance from ExaGrid – and we offer this at a price that is comparable to tape.”
The launch of DeltaZone Deduplication, combined with ExaGrid’s GRID architecture, makes ExaGrid the only vendor to offer content-aware value for backup applications in a fully scalable system. Customers can also leverage ExaGrid’s content-aware reporting for the major market-share backup applications, and extend these benefits to data protection utilities, home-grown scripts, and archiving/near-line applications.
“Scalability and flexibility are factors when considering the long-term viability of a deduplication approach. The hash index for a generic block-level deduplication approach could outgrow memory and introduce scalability limitations, but the generic approach is able to support any backup application. Byte-level delta deduplication approaches don’t require a hash table, but need content awareness of the backup application’s data stream,” said Lauren Whitehouse, senior analyst, Enterprise Strategy Group, Inc. “ExaGrid’s DeltaZone Deduplication cracks the code on these tradeoffs by offering a flexible generic deduplication approach that delivers scalability.”
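The hash-index behavior the analyst describes can be illustrated with a minimal sketch — generic Python, not ExaGrid’s implementation. Each fixed-size block is indexed by its hash, so duplicate blocks are stored once, but the in-memory index grows with the amount of unique data, which is the scalability concern noted above:

```python
import hashlib

def block_dedup(data: bytes, block_size: int = 4096):
    """Generic block-level deduplication: split the stream into
    fixed-size blocks and index each block by its hash. The index
    (a dict here) is the structure that can outgrow memory as the
    volume of unique data increases."""
    index = {}   # hash -> block contents (the hash index)
    recipe = []  # ordered hashes needed to reconstruct the stream
    stored = 0   # bytes of unique data actually kept
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        h = hashlib.sha256(block).hexdigest()
        if h not in index:
            index[h] = block
            stored += len(block)
        recipe.append(h)
    return index, recipe, stored

def restore(index, recipe) -> bytes:
    """Rebuild the original stream from the recipe."""
    return b"".join(index[h] for h in recipe)

# A backup stream where most blocks repeat deduplicates well:
stream = b"A" * 8192 + b"B" * 4096 + b"A" * 8192
index, recipe, stored = block_dedup(stream)
assert restore(index, recipe) == stream
assert stored == 8192  # five blocks in, only two unique blocks kept
```

A byte-level delta approach, by contrast, compares each backup against the prior version directly and so carries no hash table, but it must understand the backup application’s data-stream format to line the versions up, which is the content-awareness requirement mentioned in the quote.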
ExaGrid’s disk backup system with data deduplication supports the industry’s leading backup applications and utilities, including CA ARCserve, CommVault Simpana, IBM System i platform (AS/400 or iSeries), HP Data Protector, Linux/Unix dumps, Microsoft SQL dumps, Oracle RMAN dumps, Symantec Backup Exec, Symantec NetBackup, Veeam, Vizioncore vRangerPro and VMware Backup. With DeltaZone, ExaGrid will support even more backup and archive applications in the future.
ExaGrid disk backup systems are designed to meet the needs of companies whose primary storage is between 1TB and 100TB of data. ExaGrid’s unique approach to disk-based backup delivers unparalleled performance and scalability without requiring costly forklift upgrades as data grows. ExaGrid customers achieve the fastest backup times because data is written directly to disk and data deduplication is performed post-process, after the data is stored. In addition, ExaGrid’s GRID scalability enables organizations to store up to a 100TB full backup, plus weeks of retention, resulting in logical storage of petabytes of data. Performance scales with data growth since processing power, memory and bandwidth are added along with storage capacity, and data loads are automatically balanced across all servers.
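The post-process flow described above — land the backup on disk at full speed, then deduplicate after the backup window closes — can be sketched as follows. This is a hypothetical illustration of the general technique, not ExaGrid’s code; the file layout and block size are assumptions:

```python
import hashlib
import os
import tempfile

def ingest(path: str, backup: bytes) -> None:
    """Landing step: write the raw backup straight to disk.
    No hashing happens here, so ingest runs at full disk speed."""
    with open(path, "wb") as f:
        f.write(backup)

def post_process(path: str, block_size: int = 4096):
    """Later pass: read the landed file and deduplicate it,
    outside the backup window."""
    index, recipe = {}, []
    with open(path, "rb") as f:
        while block := f.read(block_size):
            h = hashlib.sha256(block).hexdigest()
            index.setdefault(h, block)
            recipe.append(h)
    return index, recipe

with tempfile.TemporaryDirectory() as d:
    landing = os.path.join(d, "backup.raw")
    data = b"X" * 4096 * 3 + b"Y" * 4096
    ingest(landing, data)                  # backup window: raw write only
    index, recipe = post_process(landing)  # dedup runs afterward
    assert b"".join(index[h] for h in recipe) == data
    assert len(index) == 2                 # three "X" blocks collapse to one
```

The design tradeoff is that the landing area must be large enough to hold an undeduplicated backup temporarily; in exchange, the backup itself is never slowed by inline hashing, which is the "fastest backup times" claim in the paragraph above.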