Data integrity is a core concern for organizations bound by compliance standards such as HIPAA, and it demands robust storage strategies. Software RAID offers a cost-effective approach but often trades speed for resilience, a gap that hardware RAID controllers are designed to close with dedicated processing. Some environments, however, particularly those where fault tolerance matters more than throughput, call for a RAID solution that prioritizes redundancy over performance. This 2024 guide explores those configurations, weighing their advantages and disadvantages in scenarios where data preservation outweighs raw speed.
The Indispensable Role of RAID in Data Protection
In today’s hyper-connected world, data is the lifeblood of organizations. From financial institutions to healthcare providers, and even small businesses, the availability and integrity of data are paramount. The consequences of data loss or corruption can be devastating, leading to financial losses, reputational damage, and even legal repercussions.
The Rising Stakes of Data Availability
The digital transformation has made us increasingly reliant on immediate data access. Downtime is no longer an inconvenience; it’s a critical business risk. Imagine a hospital unable to access patient records or a financial institution unable to process transactions. The operational paralysis and potential for harm are substantial.
The exponential growth of data, coupled with the rising sophistication of cyber threats, further exacerbates these risks. Organizations must, therefore, adopt robust strategies to ensure continuous data availability and prevent data loss.
Understanding Fault Tolerance
At the heart of data protection lies the concept of fault tolerance.
Fault tolerance is the ability of a system to continue operating properly in the event of the failure of some of its components.
In essence, it’s about building systems that are resilient to failures and can withstand unexpected disruptions.
In system design, fault tolerance is achieved through redundancy, which involves duplicating critical components or functions to provide backup in case of a failure. This redundancy can take various forms, including hardware redundancy, software redundancy, and data redundancy.
RAID: A Cornerstone of Data Redundancy
Redundant Array of Independent Disks (RAID) has emerged as a cornerstone technology for providing data redundancy and protection. RAID employs various techniques to distribute data across multiple physical drives, allowing for data recovery in case of drive failure.
By mirroring data or storing parity, often combined with striping for performance, RAID ensures that data remains accessible even if one or more drives fail.
Different RAID levels offer varying degrees of redundancy, performance, and cost, catering to diverse needs and budgets. As such, understanding the principles and implementation of RAID is essential for any organization seeking to safeguard its valuable data.
Decoding RAID: Core Concepts of Redundancy
Having established the critical role of RAID in data protection, it’s essential to delve into the foundational concepts that make this technology effective. At its heart, RAID achieves redundancy through two primary mechanisms: parity and data mirroring. Understanding these concepts is crucial to grasp how RAID safeguards against data loss and ensures continuous operation.
Parity: Data Protection Through Intelligent Calculation
Parity is a method of data redundancy that uses mathematical calculations to ensure data integrity.
Instead of creating a complete duplicate of data, parity involves calculating a value based on the data stored on multiple drives. This parity data is then stored on one or more dedicated drives or distributed across all drives in the array, depending on the RAID level.
If a drive fails, the missing data can be reconstructed by performing calculations on the remaining data and the parity information.
This process is akin to solving an equation where one variable is missing, and parity provides the necessary information to determine the missing value.
How Parity Calculations Work
The most common parity calculation involves the exclusive OR (XOR) operation. XOR compares corresponding bits across multiple data blocks.
If the number of "1" bits is even, the resulting parity bit is "0." If the number of "1" bits is odd, the parity bit is "1."
This seemingly simple calculation allows for the reconstruction of a missing data bit.
For instance, in RAID 5, parity data is distributed across all drives. If one drive fails, the XOR operation is performed on the remaining drives to reconstruct the lost data on the failed drive. This process is done seamlessly without interrupting operations, providing high availability.
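The reconstruction described above can be demonstrated with a short sketch, using Python's `functools.reduce` to XOR toy 4-byte blocks (the drive contents are arbitrary example values):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR corresponding bytes across a list of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Three data "drives" holding one stripe each (toy 4-byte blocks).
d0, d1, d2 = b"\x0f\x00\xff\x55", b"\xf0\x0f\x00\xaa", b"\x33\xcc\x3c\x00"
parity = xor_blocks([d0, d1, d2])

# Simulate losing drive 1: XORing the survivors with the parity block
# reproduces the lost data exactly.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```

Because XOR is its own inverse, the same operation that generates the parity block also regenerates any single missing block.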
The Write Penalty: A Trade-Off for Data Protection
While parity offers a cost-effective means of data redundancy, it comes with a performance trade-off known as the write penalty.
Every time data is written to a RAID array using parity, the system must recalculate the parity information.
This process involves reading the existing data and parity, performing the XOR calculation, and then writing both the new data and the updated parity information. This results in more I/O operations compared to writing data without parity.
The write penalty can be particularly noticeable in RAID levels like RAID 5 and RAID 6, which rely heavily on parity calculations. Careful consideration of workload characteristics is therefore crucial to mitigate the write penalty impact. Choosing a suitable RAID level for a workload can significantly improve system efficiency.
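A minimal sketch of the read-modify-write cycle behind the penalty; the `raid5_small_write` helper is hypothetical and simply tallies the four operations a controller would issue for a small write:

```python
def raid5_small_write(old_data, old_parity, new_data):
    """Simulate a RAID 5 small write, returning the new parity and the I/O log."""
    io = []
    io.append("read old data")      # I/O 1
    io.append("read old parity")    # I/O 2
    # New parity = old parity XOR old data XOR new data; no other drives are touched.
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))
    io.append("write new data")     # I/O 3
    io.append("write new parity")   # I/O 4
    return new_parity, io

new_parity, ops = raid5_small_write(b"\x01", b"\x0f", b"\x02")
assert len(ops) == 4  # four back-end I/Os for one logical write
```

The key point is that updating one block forces two reads and two writes, which is why parity levels lag behind mirroring for small random writes.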
Data Mirroring: Exact Copies for Ultimate Reliability
Data mirroring represents a more straightforward approach to redundancy.
It involves creating an exact, identical copy of the data on two or more drives. This method is primarily used in RAID 1.
Every write operation is performed simultaneously on all mirrored drives, ensuring that the data is always consistent across all copies.
Ensuring Immediate Failover
The key advantage of data mirroring is its ability to provide immediate failover in the event of a drive failure. If one drive fails, the system can seamlessly switch to one of the mirrored copies without any data loss or downtime.
This makes data mirroring ideal for applications that require high availability and cannot tolerate any interruption.
The trade-off, however, is the cost. Because every piece of data has to be replicated, effective storage capacity is significantly reduced.
Despite the cost, the level of protection against data loss makes data mirroring an invaluable strategy in critical environments. By maintaining exact copies of data, organizations can achieve peace of mind, knowing that data integrity and accessibility are safeguarded at all times.
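The mirroring behavior described above can be sketched as a toy in-memory array; the `Mirror` class is illustrative, not a real driver:

```python
class Mirror:
    """Toy RAID 1 mirror: every write goes to all members, reads fail over."""

    def __init__(self, n_drives=2):
        self.drives = [dict() for _ in range(n_drives)]  # block number -> data
        self.failed = set()

    def write(self, block, data):
        for drive in self.drives:          # simultaneous write to all members
            drive[block] = data

    def read(self, block):
        for i, drive in enumerate(self.drives):
            if i not in self.failed:       # immediate failover to a survivor
                return drive[block]
        raise IOError("all mirrors failed")

m = Mirror()
m.write(0, b"patient record")
m.failed.add(0)                            # primary drive dies
assert m.read(0) == b"patient record"      # data still served from the mirror
```

Note the capacity cost the next section's levels try to avoid: the array stores the payload once per drive.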
Exploring the RAID Levels: A Comprehensive Overview
With the core mechanisms of parity and mirroring established, we can now examine the individual RAID levels and see how each one applies those mechanisms to protect data and manage performance trade-offs.
Each RAID level represents a unique configuration, balancing data protection, storage efficiency, and performance characteristics. Selecting the appropriate level is paramount to aligning storage infrastructure with specific application requirements.
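As a rough guide to the storage-efficiency side of that balance, usable capacity for the redundant levels covered below can be estimated as follows (a simplification that ignores metadata overhead):

```python
def usable_capacity(level, n, size):
    """Approximate usable capacity for n identical drives of `size` TB each."""
    if level == "RAID1":
        return size                 # n-way mirror: one drive's worth of data
    if level == "RAID5":
        return (n - 1) * size       # one drive's worth lost to parity
    if level == "RAID6":
        return (n - 2) * size       # two drives' worth lost to dual parity
    if level == "RAID10":
        return (n // 2) * size      # half the drives hold mirror copies
    raise ValueError(level)

for level in ("RAID1", "RAID5", "RAID6", "RAID10"):
    print(level, usable_capacity(level, 6, 4), "TB")
# RAID1 4 TB, RAID5 20 TB, RAID6 16 TB, RAID10 12 TB
```

Six 4 TB drives illustrate the spread: mirroring pays the most capacity for its protection, parity the least.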
RAID 1 (Mirroring): Maximum Redundancy, Simpler Implementation
RAID 1, often referred to as mirroring, is a fundamental RAID level that duplicates data across two or more drives. Every piece of data written to the array is simultaneously written to all drives in the set, creating identical copies.
This approach provides the highest level of redundancy, as the array can withstand the failure of any single drive without data loss. Its simplicity also contributes to ease of implementation and management.
Typical Use Cases for RAID 1
RAID 1 is ideally suited for scenarios where data integrity and uptime are of utmost importance. Consider these situations where redundancy is paramount:
- Operating Systems: Mirroring the OS drive ensures system availability even if the primary drive fails.
- Critical Applications: Applications that cannot tolerate downtime, such as financial systems or medical records, benefit from RAID 1’s resilience.
- Small Databases: Smaller, frequently accessed databases can leverage RAID 1 for faster read performance and high availability.
Performance Implications of RAID 1
While RAID 1 offers excellent read performance (since data can be read from any drive in the mirror), its write performance can be a bottleneck. Because data must be written to all drives simultaneously, write speeds are effectively limited to the speed of the slowest drive in the array.
This write penalty is a trade-off for the superior redundancy it provides.
RAID 5 (Distributed Parity): Balancing Performance and Cost
RAID 5 employs a distributed parity scheme to balance performance, storage efficiency, and data protection. Parity data, which is used to reconstruct lost data in case of drive failure, is calculated and distributed across all drives in the array.
This distribution ensures that no single drive becomes a bottleneck for parity calculations, improving overall performance compared to RAID levels with dedicated parity drives.
Performance Trade-offs in RAID 5
RAID 5 offers good read performance, as data can be read from multiple drives simultaneously. However, write operations are more complex due to the need to calculate and write parity data.
This write penalty can impact performance, especially in write-intensive applications. Moreover, the rebuild process after a drive failure can be time-consuming, potentially affecting performance and increasing the risk of further data loss during the rebuild.
RAID 5 Suitability for Various Workloads
RAID 5 is well-suited for applications that require a balance between performance, storage capacity, and data protection. Consider these workloads:
- File Servers: RAID 5 provides adequate performance and redundancy for general-purpose file servers.
- Web Servers: Web servers that primarily serve static content can benefit from RAID 5’s read performance.
- Application Servers: Application servers with moderate write activity can utilize RAID 5 for data protection.
RAID 6 (Dual Parity): Enhanced Fault Tolerance
RAID 6 extends the data protection capabilities of RAID 5 by adding a second, independent parity block to each stripe. This means that RAID 6 can tolerate the failure of two drives simultaneously without data loss, providing enhanced fault tolerance.
Technical Details of Dual Parity
The dual parity implementation in RAID 6 involves calculating two independent parity sets for each data stripe. These parity sets are stored on different drives, ensuring that the loss of any two drives does not compromise the data.
Performance Characteristics Compared to RAID 5
While RAID 6 offers superior fault tolerance, it comes at the cost of slightly reduced write performance compared to RAID 5.
The additional parity calculation overhead increases the write penalty. Read performance, however, remains comparable to RAID 5.
RAID 10 (1+0): The Hybrid Powerhouse
RAID 10, sometimes denoted as RAID 1+0, combines the mirroring of RAID 1 with the striping of RAID 0. Data is mirrored across multiple drive pairs, and these mirrored sets are then striped together to improve performance. This hybrid approach provides both high performance and high redundancy.
Benefits for Mission-Critical Applications
RAID 10 is particularly well-suited for mission-critical applications that demand both high performance and high availability. Consider these benefits:
- High Throughput: Striping data across multiple mirrored sets provides excellent read and write performance.
- Superior Redundancy: The mirrored pairs ensure that the array can withstand multiple drive failures without data loss, as long as the failures do not occur in the same mirrored set.
- Fast Rebuild Times: Rebuilding a failed drive in a RAID 10 array is relatively quick, as it only involves copying data from the surviving drive in the mirrored pair.
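The survivability rule in the last point can be checked with a small sketch, assuming adjacent drives form the mirrored pairs:

```python
def raid10_survives(n_drives, failed):
    """True if no mirrored pair has lost BOTH of its members."""
    pairs = [(i, i + 1) for i in range(0, n_drives, 2)]  # adjacent drives mirrored
    return all(not (a in failed and b in failed) for a, b in pairs)

# Six drives mirrored as (0,1), (2,3), (4,5).
assert raid10_survives(6, {0, 3, 5})       # three failures, all in different pairs
assert not raid10_survives(6, {4, 5})      # two failures, but the same pair
```

This is why RAID 10 can survive up to half its drives failing in the best case, yet only guarantees tolerance of a single arbitrary failure.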
Cost Considerations for RAID 10
The primary drawback of RAID 10 is its higher cost. Because data is mirrored, it requires twice the storage capacity compared to RAID 5 or RAID 6. This increased cost makes RAID 10 a premium option best suited for applications where performance and redundancy justify the investment.
Performance and Reliability Factors: Optimizing Your RAID Setup
Having explored the various RAID levels and their unique characteristics, it’s crucial to understand the factors that significantly influence the performance and reliability of RAID arrays. These factors, such as rebuild time, data integrity, and the write penalty, can impact the effectiveness of your RAID implementation. Optimizing these aspects is essential for maximizing data protection and ensuring consistent performance.
Rebuild Time: Minimizing Downtime After Drive Failure
The time it takes to rebuild a RAID array after a drive failure is a critical factor. A prolonged rebuild period leaves the array vulnerable to further data loss if another drive fails during the process. Several elements can influence rebuild time, including:
- Drive Capacity: Larger drives inherently take longer to rebuild due to the sheer volume of data that needs to be reconstructed.
- RAID Level: Some RAID levels, like RAID 5, require more complex parity calculations during the rebuild process, extending the duration.
- Controller Performance: The processing power of the RAID controller significantly impacts the rebuild speed. A faster controller can expedite the data reconstruction process.
- System Load: Rebuilding an array is resource-intensive. High system load from other applications can slow down the rebuild.
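A back-of-envelope rebuild-time estimate follows directly from capacity and sustained throughput; the throughput figures below are illustrative assumptions, not vendor specifications:

```python
def rebuild_hours(capacity_tb, throughput_mb_s):
    """Estimated rebuild time: drive capacity divided by sustained throughput."""
    total_mb = capacity_tb * 1_000_000     # 1 TB = 1,000,000 MB in decimal units
    return total_mb / throughput_mb_s / 3600

print(f"{rebuild_hours(8, 100):.1f} h")    # 8 TB HDD at ~100 MB/s -> 22.2 h
print(f"{rebuild_hours(8, 500):.1f} h")    # 8 TB SSD at ~500 MB/s -> 4.4 h
```

Even this optimistic model, which assumes the rebuild runs at full speed with no competing workload, shows why large HDD arrays can sit in a degraded state for the better part of a day.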
Strategies for Optimizing Rebuild Time
To minimize rebuild time and reduce the risk of data loss, consider these strategies:
- Use Faster Drives: Solid-state drives (SSDs) offer significantly faster read and write speeds compared to traditional hard disk drives (HDDs), leading to a dramatic reduction in rebuild time.
- Invest in a High-Performance RAID Controller: A robust RAID controller with dedicated processing power can handle the rebuild process more efficiently.
- Schedule Rebuilds During Off-Peak Hours: Minimize system load during the rebuild process by scheduling it during periods of low activity.
- Implement Hot Spares: A hot spare drive automatically replaces a failed drive, initiating the rebuild process immediately and reducing the window of vulnerability.
Data Integrity: Preventing Silent Data Corruption
Data integrity is paramount in any storage system. Silent data corruption, where data is altered without any apparent error messages, poses a significant threat. RAID systems are not immune to this issue. Error detection mechanisms and data validation techniques are crucial to protecting against silent data corruption.
Error Detection Mechanisms
RAID systems employ several error detection mechanisms, including:
- Parity Checking: Parity data allows the system to detect and correct single-bit errors.
- Checksums: Checksums are calculated for each data block, enabling the system to verify data integrity.
- SMART (Self-Monitoring, Analysis and Reporting Technology): SMART monitors drive health and can provide early warnings of potential failures.
Data Validation Techniques
In addition to error detection mechanisms, proactive data validation techniques are essential:
- Regular Data Scrubbing: Data scrubbing involves reading all data on the array to verify its integrity and correct any errors.
- End-to-End Data Protection: Implementing data integrity checks from the application layer to the storage layer ensures that data remains consistent throughout its lifecycle.
- Using RAID Controllers with Advanced Error Correction: Some advanced RAID controllers can identify and correct a wider range of errors.
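A minimal data-scrubbing sketch using per-block SHA-256 checksums (the block contents are arbitrary examples):

```python
import hashlib

# Store a checksum for each block at write time.
blocks = {0: b"ledger page 1", 1: b"ledger page 2"}
checksums = {k: hashlib.sha256(v).hexdigest() for k, v in blocks.items()}

# Simulate silent corruption: block 1 changes with no error reported anywhere.
blocks[1] = b"ledger page 2!"

# The scrub re-reads every block and compares against the stored checksums.
corrupt = [k for k, v in blocks.items()
           if hashlib.sha256(v).hexdigest() != checksums[k]]
assert corrupt == [1]           # the scrub flags the altered block
```

The same pattern scales up: filesystems such as ZFS and Btrfs checksum every block for exactly this reason.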
Understanding the Write Penalty: Impact on Performance
The write penalty is a performance overhead associated with certain RAID levels, particularly those that use parity (e.g., RAID 5 and RAID 6). When data is written to these RAID arrays, the controller must also calculate and write parity information, which increases the write latency.
The write penalty is most pronounced in parity-based levels. In RAID 5, each small write typically requires four I/Os: reading the old data and old parity, then writing the new data and new parity. RAID 6, with its dual parity, raises this to six I/Os per write (three reads and three writes).
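These commonly cited penalty factors translate into effective write IOPS as a simple division; the per-drive IOPS figure below is an assumption for illustration:

```python
# Back-end I/Os consumed per logical random write, by RAID level.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_write_iops(raw_iops, level):
    """Divide the raw IOPS the drives deliver by the per-write I/O cost."""
    return raw_iops // WRITE_PENALTY[level]

raw = 8 * 150                       # eight HDDs at an assumed ~150 IOPS each
for level in ("RAID10", "RAID5", "RAID6"):
    print(level, effective_write_iops(raw, level))
# RAID10 600, RAID5 300, RAID6 200
```

The same eight drives deliver three times the random-write throughput under RAID 10 as under RAID 6, which is the arithmetic behind the mitigation advice that follows.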
Mitigation Strategies
Several strategies can mitigate the impact of the write penalty:
- Choose the Right RAID Level: Consider the workload characteristics when selecting a RAID level. For write-intensive applications, RAID 10 or RAID 1 might be a better choice than RAID 5 or RAID 6.
- Use a Write-Back Cache: A write-back cache allows the controller to acknowledge write requests before the data is actually written to the drives, improving write performance. However, this poses a data loss risk during power failures, so pair it with a battery backup unit (BBU) on the RAID controller.
- Optimize Disk Alignment: Proper disk alignment can improve write performance by ensuring that write operations align with physical disk sectors.
- Use Solid State Drives (SSDs): Due to their architecture, SSDs are significantly faster than traditional hard drives, and using them can largely eliminate the practical impact of the write penalty.
RAID Implementation and Management: Hardware, Software, and Hot Spares
Choosing a RAID level is only the beginning. Practical implementation details, including the controller type, ongoing monitoring, and provisioning for failures, determine how effectively an array protects data day to day. Successfully navigating these aspects is essential for maintaining optimal data protection and system uptime.
RAID Controllers: The Crossroads of Hardware and Software
The choice between hardware and software RAID controllers represents a fundamental decision point in RAID implementation. Each approach offers distinct advantages and disadvantages that must be carefully weighed against specific requirements and constraints.
Hardware RAID: Dedicated Power and Performance
Hardware RAID controllers are dedicated processing units specifically designed to manage RAID operations. They operate independently of the host system’s CPU, offloading the computational burden and generally resulting in superior performance, particularly in write-intensive workloads.
These controllers typically feature dedicated cache memory to further accelerate I/O operations. Their key advantage lies in their self-contained nature, providing a consistent and reliable RAID implementation across different operating systems.
However, hardware RAID solutions often come with a higher upfront cost and may limit flexibility in terms of configuration options. Vendor lock-in can also be a concern, as replacing a failed controller may require sourcing a specific model.
Software RAID: Flexibility at a Cost
Software RAID, conversely, relies on the host system’s CPU and operating system to manage RAID functions. This approach eliminates the need for a dedicated hardware controller, reducing initial costs and offering greater flexibility in terms of RAID level selection and configuration.
Software RAID can be easily implemented on commodity hardware.
However, the dependence on the host CPU can lead to performance degradation, especially under heavy load. The CPU must dedicate resources to RAID operations, potentially impacting other applications and system responsiveness.
Furthermore, software RAID’s compatibility is inherently tied to the operating system. This can create challenges during system upgrades or migrations.
Monitoring and Maintenance: Proactive Error Detection
Implementing RAID is not a "set it and forget it" endeavor. Consistent monitoring and proactive maintenance are crucial for identifying potential problems before they escalate into data loss events.
The Importance of Monitoring Tools
A robust monitoring system is essential for detecting anomalies, tracking drive health, and assessing overall RAID performance.
Monitoring tools provide real-time insights into various parameters, including drive temperature, SMART attributes, RAID status, and I/O throughput. These insights enable administrators to proactively identify potential issues, such as impending drive failures or performance bottlenecks.
Alerting mechanisms should be configured to notify administrators of critical events.
Identifying Performance Bottlenecks and Optimizing RAID Performance
Beyond error detection, monitoring tools can also help identify performance bottlenecks within the RAID array.
Analyzing I/O patterns, disk utilization, and latency metrics can reveal areas where performance can be improved. Possible optimizations include:
- Adjusting RAID level
- Optimizing stripe size
- Upgrading to faster drives
- Implementing caching strategies
Regular performance audits are crucial for ensuring that the RAID array continues to meet the evolving needs of the system.
Hot Spares: Automated Failover for Continuous Operation
Hot spare drives represent a critical component of a resilient RAID implementation. A hot spare is an idle drive that is automatically brought online to replace a failed drive within the array.
Ensuring Uninterrupted Operations
The presence of a hot spare significantly reduces downtime by automating the rebuild process. When a drive fails, the hot spare is immediately activated, and the RAID controller begins rebuilding the data onto the new drive.
This minimizes the window of vulnerability during which the array is operating in a degraded state. The automated failover ensures continuous operation, minimizing disruptions to critical applications and services.
The Cost of Preparedness
While hot spares add to the initial cost of the RAID array, the benefits of enhanced availability and reduced downtime far outweigh the expense in many scenarios.
The decision to implement hot spares should be based on a careful assessment of the organization’s tolerance for downtime and the criticality of the data being protected. In mission-critical environments, hot spares are an indispensable element of a robust data protection strategy.
RAID in Diverse Storage Solutions: From NAS to the Cloud
RAID levels do not exist in isolation; they are put to work across a wide range of storage solutions. From the home office to massive cloud data centers, RAID or RAID-like implementations play a critical role in ensuring data availability and integrity. This section explores how RAID adapts to the specific needs of NAS devices, enterprise SAN environments, and the infrastructures of leading cloud storage providers.
NAS (Network Attached Storage) Devices: RAID for Home and Small Business
Network Attached Storage (NAS) devices have become indispensable for home users and small businesses seeking centralized storage and data protection. RAID is a cornerstone of NAS functionality, providing redundancy against drive failures, which is especially important for users who may lack dedicated IT support.
NAS devices commonly support several RAID levels, including RAID 1 for mirroring, RAID 5 for distributed parity, and RAID 10 for a combination of mirroring and striping. The choice of RAID level depends on the user’s priorities, balancing capacity, redundancy, and performance.
Popular NAS vendors like Synology, QNAP, and ASUSTOR offer user-friendly interfaces for configuring and managing RAID arrays, simplifying the process for non-technical users. These interfaces typically include features for monitoring drive health, initiating rebuilds after a drive failure, and setting up automated backups.
SAN (Storage Area Network) Devices: Enterprise-Level RAID
Storage Area Networks (SANs) are designed to provide high-performance, block-level storage for enterprise applications. In SAN environments, RAID is typically implemented at the hardware level, using dedicated RAID controllers to offload processing from the host servers.
These controllers often support advanced RAID levels, such as RAID 6, which offers enhanced fault tolerance by using dual parity. SANs also incorporate sophisticated features like hot spares, automatic failover, and remote replication to ensure continuous data availability.
The selection of RAID levels in SANs depends on the specific application requirements, balancing the need for high throughput, low latency, and robust data protection. Factors such as database size, transaction volume, and recovery time objectives influence the optimal RAID configuration.
Cloud Storage Providers: Redundancy at Scale
Cloud storage providers face the challenge of managing massive amounts of data while ensuring high availability and durability. While they may not always use traditional RAID in the strictest sense, they employ similar redundancy techniques to protect against data loss.
These techniques often involve distributing data across multiple storage devices and geographical locations. Cloud providers utilize erasure coding, object replication, and other advanced methods to achieve redundancy levels exceeding those of traditional RAID arrays.
The specific implementation details are often proprietary, but the underlying principle remains the same: to safeguard data against hardware failures, natural disasters, and other unforeseen events.
Unaltered Redundancy RAID (URR): Dedicated Redundancy System
Unaltered Redundancy RAID (URR) is a less common term for a somewhat different approach to data protection. In a URR system, data is written to one or more physical volumes without any modifications or transformations, creating an exact, byte-for-byte duplicate for redundancy purposes.
This method is particularly valuable in scenarios requiring a verified and unmodified data copy, which could be due to archival purposes or compliance requirements. The primary advantage of URR is its simplicity and transparency, allowing for easy data recovery and verification. However, it also typically carries higher storage costs compared to other RAID levels due to its direct duplication strategy.
Beyond RAID: Backup and Disaster Recovery Strategies
For all its strengths, RAID should not be considered a comprehensive data protection solution. While RAID provides redundancy and uptime in the event of drive failure, it is only one piece of a more extensive data security puzzle.
Relying solely on RAID without implementing robust backup and disaster recovery strategies is akin to securing your home with only a strong front door. You might deter some threats, but you remain vulnerable to various other risks. This section highlights why RAID is not a substitute for backups and disaster recovery planning, emphasizing the importance of comprehensive strategies to safeguard data against a wide array of potential disasters.
RAID is Not a Backup: The Crucial Importance of Backup Strategies
RAID excels at providing high availability. This means that in the event of a drive failure, your system can continue operating with minimal disruption. However, RAID does not protect against data loss scenarios such as accidental deletion, data corruption, or, increasingly, ransomware attacks. These threats can compromise data across all drives in a RAID array, rendering the redundancy offered by RAID essentially useless.
A robust backup strategy is paramount. It involves creating copies of your data and storing them in a separate location, isolated from the primary system. This ensures that even if the primary system is compromised, you have a clean copy of your data to restore.
Backup strategies should incorporate the 3-2-1 rule:
- 3 copies of your data.
- 2 different storage media.
- 1 offsite copy.
This approach offers comprehensive protection against various failure scenarios.
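A hypothetical inventory check against the 3-2-1 rule might look like this (the `copies` list is invented for illustration):

```python
# Each entry describes one backup copy of the same dataset.
copies = [
    {"media": "disk",  "offsite": False},   # primary on-premise backup
    {"media": "tape",  "offsite": False},   # second media type, same site
    {"media": "cloud", "offsite": True},    # offsite copy
]

def meets_3_2_1(copies):
    """3 copies, on at least 2 media types, with at least 1 offsite."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

assert meets_3_2_1(copies)
assert not meets_3_2_1(copies[:2])   # dropping the cloud copy breaks the rule
```

Automating a check like this against a real backup inventory makes the policy auditable rather than aspirational.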
We recommend implementing regular backups, both on-premise and cloud-based. On-premise backups provide quick recovery for minor incidents, while cloud-based backups offer protection against physical disasters, such as fire or flood, that could affect your primary site.
Disaster Recovery: Preparing for Large-Scale Failures
Disaster recovery (DR) planning takes data protection a step further. It involves preparing for large-scale failures that could disrupt your entire business. This could include natural disasters, cyberattacks, or major hardware failures.
A well-defined disaster recovery plan outlines the steps necessary to restore critical business functions as quickly as possible. This includes identifying critical systems, establishing recovery time objectives (RTOs), and implementing redundant systems and processes.
Disaster recovery strategies should include:
- Redundant Infrastructure: Replicating critical systems in a separate location.
- Failover Mechanisms: Automating the switch to backup systems in case of failure.
- Regular Testing: Conducting regular disaster recovery drills to ensure the plan is effective.
Investing in disaster recovery planning is an investment in the long-term resilience of your business.
Data Corruption: RAID as Only Partial Mitigation
Data corruption, often silent and insidious, poses a significant threat that RAID alone cannot fully address. While some advanced RAID implementations offer basic error detection, they do not prevent data corruption from occurring or spreading across the array.
Data corruption can stem from various sources, including hardware malfunctions, software bugs, and even cosmic rays. The result is often the same: data loss or system instability.
To mitigate data corruption risks, implementing end-to-end data integrity checks is crucial. This involves using checksums or other validation techniques to verify the integrity of data throughout its lifecycle. Regularly scanning your storage systems for errors and inconsistencies can help identify and address data corruption issues before they lead to significant problems.
RAID, while valuable for uptime and drive failure protection, offers only partial mitigation against data corruption. A holistic approach to data integrity is essential for comprehensive data protection.
Frequently Asked Questions
When is prioritizing redundancy over performance in RAID a good choice?
Prioritizing redundancy is ideal when data loss is unacceptable and downtime must be minimized, even if it means sacrificing speed. This is the sensible choice whenever crucial operational data or sensitive client information is involved.
What are examples of RAID levels that focus on redundancy over performance?
RAID 1 (mirroring) and RAID 6 (dual parity) are classic examples. RAID 1 duplicates data across drives, while RAID 6 tolerates two drive failures. These configurations trade some write performance for a high degree of data protection.
How does focusing on redundancy impact the cost of a RAID system?
Typically, prioritizing redundancy increases costs. Redundant RAID levels require more drives than performance-focused levels to deliver the same usable capacity. The additional drives increase hardware costs, but the expense is usually justified when the data being protected is important.
What are the downsides of prioritizing redundancy in RAID?
Performance is the main trade-off. Write speeds are often slower compared to RAID levels designed for performance. This performance hit is the price of the increased data protection that a redundancy-focused configuration provides.
So, that’s the gist of prioritizing data safety over speed with RAID! Hopefully, this clears up the confusion and helps you choose the right setup. Remember, if keeping your data safe and sound is your top priority, a redundancy-focused RAID level like RAID 1, 5, 6, or 10 is definitely the way to go. Good luck with your storage adventures!