
Protecting Your Data: Backup Best Practices for World Backup Day

31/03/2023 · 10 minute read

Every organisation understands the importance of backing up its data to avoid losing it, but how robust is your data backup strategy, and are you following data backup best practices?

Data is one of the most valuable assets for any organisation. Losing data due to hardware failure, cyberattacks, or natural disasters can have devastating consequences; unfortunately, many businesses set up their backup systems but don’t fully consider the processes and systems needed to ensure that the data is protected against threats, easily accessible, and fully recoverable.

Statistics show that 60% of data backups are incomplete and 50% of restores are unsuccessful.

In this World Backup Day blog post, we’ll discuss data backup best practices, including on-premise backup, cloud backup, and immutable backup, covering both onsite and offsite immutability.

We’ll also cover the importance of regularly testing your backup and recovery strategies and the risks associated with poor backup processes and systems.

Here are our top 10 tips for data backup best practices:

  • Develop a backup strategy that fits your organisation’s needs and complies with industry regulations.
  • Use a combination of onsite and offsite backup solutions to ensure redundancy and protection against both online and physical risks.
  • Implement an immutable backup to ensure data integrity, especially for critical data.
  • Keep your backup systems up-to-date and monitor them regularly for hardware failures, software updates, and storage capacity.
  • Regularly test your backup and recovery systems to ensure that your data can be restored in case of a disaster or data loss.
  • Configure notifications, and monitor your disk space capacity (see the capacity and retention sketch after this list).
  • Clean away old data that is no longer required, and ensure your data retention policy also includes the deletion of old backup files.
  • Name your backup jobs using meaningful naming conventions, for example, _copy for copy jobs, _cloud for cloud files, _incremental for incrementals, so that they’re easily identifiable.
  • Have a sensible backup chain that includes regular full backups and periodic synthetic full backups; otherwise, you risk corrupted data on an incremental-forever chain.
  • Consider your Recovery Time Objectives. RTOs represent the maximum allowable downtime after a disruption and should be used to inform decisions about backup frequency, retention policies, and disaster recovery strategies.
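
To make the notification and retention tips above concrete, here is a minimal sketch of a capacity check and backup pruning script. It assumes a local backup repository at a hypothetical path, a 30-day retention period, and Veeam-style .vbk file names; adapt all of these to your own environment and tooling.

```python
# Minimal sketch: monitor backup disk capacity and prune expired backup files.
# BACKUP_DIR, RETENTION_DAYS and the 80% threshold are illustrative values --
# adjust them to your own environment and retention policy.
import shutil
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups")   # hypothetical backup repository
RETENTION_DAYS = 30                 # hypothetical retention policy
CAPACITY_ALERT_THRESHOLD = 0.80     # alert when the disk is 80% full

def check_disk_capacity() -> None:
    """Warn when the backup volume is approaching capacity."""
    usage = shutil.disk_usage(BACKUP_DIR)
    used_fraction = usage.used / usage.total
    if used_fraction >= CAPACITY_ALERT_THRESHOLD:
        # Replace print() with your alerting channel (email, Teams, etc.)
        print(f"WARNING: backup disk {used_fraction:.0%} full")

def prune_expired_backups() -> None:
    """Delete backup files older than the retention period."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for backup_file in BACKUP_DIR.glob("*.vbk"):
        if backup_file.stat().st_mtime < cutoff:
            backup_file.unlink()
            print(f"Pruned expired backup: {backup_file.name}")

if __name__ == "__main__":
    check_disk_capacity()
    prune_expired_backups()
```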

3-2-1-1-0 Backup Best Practice Rule

The 3-2-1 backup rule is a widely recognised backup best practice that recommends you keep at least three copies of your data, stored in two different formats, with one of those copies stored offsite.

The 3-2-1-1-0 backup rule is a variation of the 3-2-1 rule, adding two further layers of protection:

  • 3 copies of your data: You should have three separate copies of your data, including the original data and two backup copies, so that if one copy is lost, you can still recover your data from the other copies.
  • 2 different formats: You should store your data in at least two different formats. For example, you could store one copy on an external hard drive and another copy in the cloud.
  • 1 copy offsite: You should store at least one backup copy of your data offsite or in a different physical location than your primary data and other backups. This provides protection against natural disasters, theft, or other events that could cause loss of all your data.
  • 1 offline immutable copy: This additional layer of protection suggests that one backup copy should be kept offline and disconnected from the network to protect against cyber-attacks or ransomware.
  • 0 errors: The final part of the 3-2-1-1-0 backup rule means that you should regularly test your backups to ensure they are error-free and can be easily restored if needed. A small sketch that checks a backup estate against the full rule follows this list.
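
As a quick illustration of the rule, here is a minimal sketch that checks a described backup estate against each of the five conditions. The BackupCopy fields and the example estate are illustrative, not a real product API.

```python
# Minimal sketch: check a described backup estate against the 3-2-1-1-0 rule.
# The BackupCopy structure and the example data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str            # e.g. "disk", "tape", "cloud-object-storage"
    offsite: bool
    offline_immutable: bool
    verified_error_free: bool

def satisfies_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3                                  # 3 copies of the data
        and len({c.media for c in copies}) >= 2           # 2 different formats
        and any(c.offsite for c in copies)                # 1 copy offsite
        and any(c.offline_immutable for c in copies)      # 1 offline immutable copy
        and all(c.verified_error_free for c in copies)    # 0 errors (tested backups)
    )

estate = [
    BackupCopy("disk", offsite=False, offline_immutable=False, verified_error_free=True),
    BackupCopy("tape", offsite=True, offline_immutable=True, verified_error_free=True),
    BackupCopy("cloud-object-storage", offsite=True, offline_immutable=False, verified_error_free=True),
]
print(satisfies_3_2_1_1_0(estate))  # True
```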

What backup solution is right for your business?

The choice of backup solution for an organisation depends on several factors, such as the size and complexity of the organisation, the type of data being backed up, the budget, and the availability of IT resources.

On-Premise Backup

Traditional on-premise backup solutions involve storing data on physical devices such as tape drives or external hard drives that are kept on-premises, with a copy sent offsite for storage. This method is still prevalent among many businesses due to its simplicity and cost-effectiveness. However, traditional onsite backup solutions alone have several disadvantages, including limited storage capacity, physical risks such as theft or fire, and the need for manual intervention, which can often result in human error.

On-premise backup solutions are suitable for organisations that have the necessary IT resources and infrastructure to manage and maintain backup systems in-house. On-premise backup offers greater control and customisation over backup processes, as well as faster backup and restore times. However, on-premise backup can be costly to set up and maintain and may not provide sufficient protection against large-scale disasters or cyber-attacks.

Cloud Backup

Cloud backup solutions are suitable for organisations that require scalability, flexibility, and ease of use in their backup processes. Cloud backup offers greater redundancy and availability of backup data, the ability to scale up or down to meet changing data storage needs, lower upfront costs, and pay-as-you-go pricing models. However, cloud backup is not suitable for everyone; for example, it may not suit organisations with slow or unreliable internet connections.

Whilst cloud backup providers typically have robust security measures in place to protect against data loss, corruption, or theft, there are still data privacy and security concerns to consider.

When using a cloud backup provider, your data is stored on servers managed by a third party. This can raise concerns about the privacy of sensitive data, especially if the data is subject to regulatory compliance requirements. Like everyone else, cloud backup providers can also be vulnerable to data breaches: if the cloud backup system has any vulnerabilities, such as unpatched software or weak passwords, an attacker may be able to exploit them to gain access to the backup data.

Vulnerability in Veeam Backup & Replication – March 2023

At the beginning of March, leading backup provider Veeam announced that a vulnerability had been found in its Backup & Replication product, whereby unauthorised users operating within the backup infrastructure network perimeter were able to request encrypted credentials from the Veeam Backup & Replication (VBR) service and gain access to the backup infrastructure.

The vulnerability, CVE-2023-27532, was rated as high severity with a CVSS score of 7.5 and affects all Veeam Backup & Replication versions.

Patches were released for the two current versions, VBR 11 and 12, but fixes are not available for older installs, meaning any Veeam users running an older version need to upgrade to VBR 11 or 12 and apply the patches as a matter of urgency.

For further information and patch details, please visit Veeam’s website.

If you need any assistance with the upgrade of your Veeam environment, please get in touch with us.

Hybrid Backup Environments

Many organisations choose to use a combination of both on-premise and cloud backup; having both provides redundancy and ensures that the data is always available, regardless of the location of the backup system or the status of the network.

Hybrid backup offers greater flexibility in backup processes and can provide faster backup and restore times, as well as greater redundancy and availability of backup data. However, hybrid backup may be more complex to set up and manage and may require additional IT resources and expertise.

Onsite backup can provide quick and easy access to data in the event of a disaster or outage, as the data is stored locally and can be easily restored. However, onsite backup may not be sufficient to protect against large-scale disasters or cyber-attacks, which may render local backup data inaccessible.

Having an additional, offsite, cloud backup can provide an extra layer of protection by storing backup data offsite, in geographically dispersed datacentres. This can help to ensure the availability and integrity of backup data in the event of a disaster or cyber-attack, as well as providing greater scalability and flexibility.

Immutable Backup

Immutable backup refers to a backup strategy that prevents any changes or modifications to the data once it’s backed up. The concept is simple: once the data is backed up, it cannot be changed or deleted. Having an immutable backup is now considered one of the backup best practices.

There are two types of immutable backup strategy, onsite and offsite. We have implemented both for many of our clients, with some opting for a single immutable copy and other, more regulated, organisations opting to have both onsite and cloud immutable copies in place.

Onsite Immutable Backup

Onsite immutability involves backing up data to a physical device that is stored on-site. This method provides quick access to the data in case of a disaster or data loss. Onsite immutability is also beneficial for organisations with limited internet bandwidth or connectivity issues.

However, onsite immutability has its weaknesses. For instance, it exposes the data to physical risks such as theft, fire, or water damage. It also requires periodic maintenance, including monitoring for hardware failures, updating software, and upgrading storage capacity.

Offsite Immutable Backup

Offsite immutability involves backing up data to a physical device that is stored offsite. This method provides an additional layer of protection against physical risks, such as natural disasters, theft, or vandalism. Offsite immutability is also beneficial for organisations with limited on-site storage capacity.

However, offsite immutability has its challenges. For instance, it requires a reliable internet connection, which can be costly, especially for organisations with large data volumes. Additionally, restoring data from an offsite location can take longer than restoring data from an onsite location.

Veeam Immutability

As a Veeam Managed Services partner, we have used Veeam to deliver an immutable backup copy in the cloud for many of our clients by using a feature called “Object Lock,” which allows for the creation of WORM (Write-Once-Read-Many) storage in supported cloud storage platforms. This means that once data is written to the storage, it cannot be modified, overwritten, or deleted until a specified retention period has passed.

Once Veeam has created a backup of the data, it is stored in a backup file; that backup file is then copied to a cloud storage repository that supports Object Lock, such as Amazon S3, Microsoft Azure Blob, or Google Cloud Storage. The Object Lock feature is enabled on the cloud storage repository, which creates WORM storage for the backup file. Once the backup file is written to the WORM storage, it cannot be modified, overwritten, or deleted until the retention period has passed.

Veeam can still access the backup file and restore the data from it during the retention period, but it cannot modify the backup file. Once the retention period has passed, the backup file can be deleted or overwritten as needed.
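
For illustration, the snippet below shows the underlying S3 Object Lock primitive directly, using the boto3 SDK. This is not Veeam’s own code (Veeam manages Object Lock for you); the bucket name, object key, and 30-day retention period are placeholder assumptions.

```python
# Illustration of the S3 Object Lock (WORM) primitive that immutable cloud
# repositories build on. This is NOT Veeam's own code -- just a sketch of the
# underlying mechanism. Bucket name, key, and retention are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

# The bucket must have been created with Object Lock enabled, e.g.:
# s3.create_bucket(Bucket="example-immutable-backups", ObjectLockEnabledForBucket=True)

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

# Write the backup file with a compliance-mode lock: until retain_until passes,
# the object version cannot be overwritten or deleted, even by the root account.
with open("backup-2023-03-31.vbk", "rb") as backup_file:
    s3.put_object(
        Bucket="example-immutable-backups",
        Key="backups/backup-2023-03-31.vbk",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )

# Reads (and therefore restores) still work during the retention period:
s3.get_object(Bucket="example-immutable-backups", Key="backups/backup-2023-03-31.vbk")
```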

By leveraging Object Lock in supported cloud storage platforms, Veeam can deliver immutable backup copies that protect against malicious or accidental modification or deletion of backup data, ensuring that the data remains intact and recoverable in the event of a disaster.

It is also worth noting that immutable backups may not be compliant with certain regulations or compliance frameworks, which may require that data be modifiable or deletable during the retention period.

Testing Backup and Recovery Strategies

Regularly testing your backup and recovery strategies is crucial for ensuring that your data can actually be restored in case of a disaster or data loss. It is also a good way to understand how long the recovery of your data will take.

We understand that day-to-day IT team workloads often stop an organisation from testing its backup procedures, but it is recommended that you perform regular, random file recovery tests at least once every two to three months. This frequency can vary depending on the organisation’s data retention and recovery time objectives (RTOs), as well as any changes to the backup infrastructure or environment.

Full restore testing should be performed at least once a year, or after any significant changes to your IT infrastructure.

Testing should involve simulating a data loss scenario and attempting to restore the data using your backup and recovery systems. This process should involve testing both the backup process and the recovery process. For instance, you should verify that the backup was successful and that the data can be restored to its original state.
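
As a sketch of what such a test might look like in practice, the snippet below restores a randomly chosen file and compares checksums against the live copy. The restore_from_backup() hook is hypothetical and stands in for whatever restore mechanism your backup software provides; the paths are illustrative.

```python
# Minimal sketch of a random file recovery test: restore a randomly chosen
# file from backup and verify it matches the original byte-for-byte.
import hashlib
import random
from pathlib import Path

SOURCE_DIR = Path("/data")            # illustrative live data location
RESTORE_DIR = Path("/tmp/restore")    # illustrative restore target

def sha256(path: Path) -> str:
    """Checksum used to compare the restored file with the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_from_backup(relative_path: Path, target_dir: Path) -> Path:
    """Hypothetical hook: invoke your backup tool's restore step here."""
    raise NotImplementedError("wire this up to your backup software")

def random_restore_test() -> bool:
    # Pick a random file from the live data set and restore it from backup.
    candidate = random.choice([p for p in SOURCE_DIR.rglob("*") if p.is_file()])
    restored = restore_from_backup(candidate.relative_to(SOURCE_DIR), RESTORE_DIR)
    ok = sha256(candidate) == sha256(restored)
    print(f"{candidate}: {'OK' if ok else 'MISMATCH'}")
    return ok
```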

93% of companies that experience a major data loss and do not have a plan for recovery will be out of business in one year.

Poor Backup Processes and Systems

Poor backup processes and systems can have severe consequences, including financial losses, legal liabilities, and reputational damage. For instance, losing customer data due to a data breach can result in significant fines and loss of business. Additionally, failure to comply with industry regulations such as GDPR and HIPAA can result in legal liabilities.

Backup Alerts

It is important to keep track of backup alerts and jobs to ensure that disks are not filling up and backup chains are not broken. If disks are not monitored, backups may fail due to lack of space, which can result in data loss. Additionally, if the backup chain is broken, the backup data may be incomplete or corrupted, making it difficult or impossible to recover data in the event of a disaster or attack. Regularly monitoring backup alerts and jobs can help to prevent these issues and ensure the availability and integrity of backup data.

Take action on warnings, as eventually they will become failures! Ignoring backup errors can lead to incomplete or corrupted backup data, making it impossible to recover data in the event of a disaster or cyber attack. Regularly checking backup logs and verifying backup jobs can help to identify and address any issues with the backup system before they become more serious and impact the availability and integrity of backup data.
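
A minimal sketch of this kind of log checking, assuming a plain-text job log and simple "WARNING"/"ERROR" markers (adjust the path and patterns to whatever your backup software actually writes):

```python
# Minimal sketch: scan a backup job log for warnings and failures so they are
# acted on before warnings turn into failures. The log path and the
# "WARNING"/"ERROR" markers are illustrative assumptions.
from pathlib import Path

LOG_FILE = Path("/var/log/backup/job.log")  # illustrative log location

def scan_backup_log() -> None:
    warnings, errors = [], []
    for line in LOG_FILE.read_text().splitlines():
        if "ERROR" in line:
            errors.append(line)
        elif "WARNING" in line:
            warnings.append(line)
    if errors:
        print(f"{len(errors)} backup errors need immediate attention")
    if warnings:
        print(f"{len(warnings)} warnings to investigate before they become failures")

scan_backup_log()
```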

Healthy Backup Chains

A sensible backup chain for incremental backups or synthetic fulls is crucial to ensure data integrity and availability in the event of data loss. While incremental backups and synthetic fulls are efficient, any corrupted or lost backup can render the entire chain useless. Regular full backups and periodic synthetic fulls mitigate this risk by capturing complete data and creating a backup without reading from the primary source. Testing backups and the restoration process can identify vulnerabilities in the backup chain and enable timely remediation.
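
To illustrate the point, here is a minimal sketch that checks whether an incremental chain is anchored by a sufficiently recent full or synthetic full. The BackupPoint structure and the seven-day threshold are illustrative assumptions, not a real product API.

```python
# Minimal sketch: sanity-check an incremental backup chain by confirming it is
# anchored by a sufficiently recent full (or synthetic full) backup.
from dataclasses import dataclass
from datetime import date

@dataclass
class BackupPoint:
    created: date
    kind: str  # "full", "synthetic-full", or "incremental"

MAX_DAYS_WITHOUT_FULL = 7  # illustrative policy, not a universal rule

def chain_is_healthy(chain: list[BackupPoint], today: date) -> bool:
    """A chain of incrementals is only as good as its newest full anchor."""
    fulls = [p.created for p in chain if p.kind in ("full", "synthetic-full")]
    if not fulls:
        return False  # incremental-forever with no full anchor: unrecoverable risk
    return (today - max(fulls)).days <= MAX_DAYS_WITHOUT_FULL

chain = [
    BackupPoint(date(2023, 3, 26), "synthetic-full"),
    BackupPoint(date(2023, 3, 27), "incremental"),
    BackupPoint(date(2023, 3, 28), "incremental"),
]
print(chain_is_healthy(chain, date(2023, 3, 31)))  # True
```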

Ultimately, a healthy, appropriately designed backup system, with sensible processes and monitored alerts, is essential for ensuring that your data can be restored in the event of an unexpected data loss or cyber-attack.

If you need help designing, deploying, or managing your backup processes, get in touch by completing the form below or by contacting us on 01932 232345.

Want to know more?

Contact us today to explore how our tailored solutions can align with your business priorities.
