
Preparing to move and protect expanding AI workloads

AI workloads are critical in many areas. Christopher Rogers at Zerto, a Hewlett Packard Enterprise company, asks how these important applications can be kept running, and how the underlying data can be kept secure


Across industries and geographies, organisations are planning their AI strategies and recognising the technology's potential to revolutionise every facet of our lives. On the face of it, the only limiting factor seems to be human imagination.


However, the massive quantities of data that AI utilises and creates will require a new era of network management and recovery solutions to cope with far greater volumes of information. Otherwise, innovation will be held back by today's technology infrastructures, which are simply not up to the task.


Soon, AI workloads far larger than any seen before, serving health services, financial institutions and many other sectors, will need to operate efficiently 24/7 without downtime or degraded performance. To keep these critical applications up and running, decision-makers must make appropriate infrastructure requirements an integral part of their overall AI strategies.


The need for AI mobility

Data is at the crux of AI. All AI applications require substantial data, petabytes or even exabytes of it, for effective training of models and ongoing performance improvement. This data provides the basis for algorithms to learn patterns and correlations and to make intelligent decisions. Its quality, diversity and magnitude enable AI applications to recognise nuances, adapt to variations and constantly refine their responses.


However, the vast data sets that AI utilises were typically created at source and now reside in those same decentralised locations. For processing, these multiple and ever-growing data sets need to be moved into a single repository and then, at the end of their lifecycle, securely archived for potential re-training.


Moving large amounts of information puts tremendous strain on network resources and performance, and current synchronous technologies do not support efficient, real-time transfer of such vast quantities of data. Asynchronous or near-synchronous replication is the better option: it replicates changes continuously at block level and at low bandwidth, without producing high peaks that overload the network.
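As a rough, back-of-the-envelope illustration of why this matters (every figure below, such as 2TB of daily change and a four-hour batch window, is an assumption chosen for the example rather than a number from the article), trickling changes out continuously keeps the sustained bandwidth requirement far below the peak demanded by a bulk transfer window:

```python
# Illustrative arithmetic only: every figure here is an assumption for the example.
daily_change_gb = 2_000        # assume ~2 TB of changed blocks per day
continuous_hours = 24          # continuous replication spreads transfers across the day
batch_window_hours = 4         # assume a four-hour nightly bulk-transfer window

def required_mbps(gigabytes: float, hours: float) -> float:
    """Average bandwidth (megabits per second) needed to move the data within the window."""
    return (gigabytes * 8_000) / (hours * 3_600)   # 1 GB is roughly 8,000 megabits

print(f"Continuous block-level replication: ~{required_mbps(daily_change_gb, continuous_hours):.0f} Mbps sustained")
print(f"Nightly bulk transfer:              ~{required_mbps(daily_change_gb, batch_window_hours):.0f} Mbps peak")
```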


Evaluating replication options 

To achieve the best combination of transfer speed and completeness of recovery, near-synchronous replication offers the capabilities of synchronous replication without demanding an optimally performing network or infrastructure. It works in the same way as asynchronous replication but also shares a key quality with synchronous replication, in that data can be written to multiple locations simultaneously. Always on, it takes just seconds to replicate any changes between locations or to the recovery site.


Running continuously removes the need for replication scheduling or snapshots. With near-sync solutions, data written to the source storage is automatically replicated to the target storage without waiting for acknowledgement. This means there is no added latency or impact on the source application, which can continue operating as normal. As a result, near-synchronous replication enables faster write speeds than synchronous replication, as well as ensuring a high level of data availability and protection.
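A minimal sketch of that write path, assuming a toy in-memory "storage" and hypothetical class and method names (this is not Zerto's implementation or API), might look something like this:

```python
import queue
import threading

class NearSyncReplicator:
    """Toy near-synchronous replicator: local writes complete immediately, and each
    changed block is streamed to the target in the background, lagging by seconds.
    Hypothetical sketch only; real products track changes at the storage layer."""

    def __init__(self, source: dict, target: dict):
        self.source = source
        self.target = target
        self.changes: queue.Queue = queue.Queue()   # continuous block-level change stream
        threading.Thread(target=self._ship_changes, daemon=True).start()

    def write(self, block_id: int, data: bytes) -> None:
        self.source[block_id] = data        # write is acknowledged against source storage at once
        self.changes.put((block_id, data))  # replication happens asynchronously: no waiting
                                            # for the target, so no added latency at the source

    def _ship_changes(self) -> None:
        while True:
            block_id, data = self.changes.get()
            self.target[block_id] = data    # the recovery copy trails the source only briefly
            self.changes.task_done()

source, target = {}, {}
replicator = NearSyncReplicator(source, target)
replicator.write(42, b"training data shard")   # returns as soon as the source is updated
replicator.changes.join()                      # for the demo, wait until the copy has landed
print(target)                                  # {42: b'training data shard'}
```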


Balancing performance with data integrity makes near-synchronous replication well-suited to critical applications such as AI that rely on massive amounts of data and constant write loads.


Disaster recovery for AI workloads

It goes without saying that these gigantic workloads may need to be restored quickly in the event of an outage or security breach. The traditional backups that many organisations are using today are not geared to provide seamless business continuity and disaster recovery for critical AI applications.  


A major issue is that backups are taken periodically, so there is considerable potential for data loss between backups. Additionally, the restoration process can lead to long downtimes of days or even weeks as applications are not easily restored in working formats, but must be pieced back together manually from their component parts.


For AI, this approach may lack the granularity needed to reconstruct precise models, negatively impacting their accuracy when recovered. Furthermore, AI solutions are developing at a phenomenal speed, and depending solely on backups that might fail to capture evolving configurations and complexities could cause insurmountable problems when trying to restore them.


To support the resilience of critical AI applications, it is essential that organisations deploy solutions offering continuous data protection (CDP) so they can reduce potential data loss to a minimum.


Unlike traditional backups, which are scheduled at intervals, CDP ensures that data is protected continuously. It automatically keeps a record of every write operation, along with a timeline of when changes were made. In the event of data corruption, an outage or a cyber-attack, users can restore their systems to a precise point in time in a matter of seconds, with minimal data loss. This is vital for systems where downtime is non-negotiable.
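As a simplified illustration of that journalling idea, with hypothetical names and an in-memory store rather than any vendor's actual format, a point-in-time restore might be sketched like this:

```python
from dataclasses import dataclass, field

@dataclass
class CdpJournal:
    """Toy continuous-data-protection journal: every write is logged with a timestamp,
    so state can be rebuilt as of any chosen moment. Hypothetical sketch only;
    real CDP journals operate at the block or virtual-machine level."""
    entries: list = field(default_factory=list)   # (timestamp, key, value) records, in write order

    def record_write(self, timestamp: float, key: str, value: bytes) -> None:
        self.entries.append((timestamp, key, value))

    def restore_to(self, point_in_time: float) -> dict:
        """Replay the journal up to the chosen timestamp and return that state."""
        state: dict = {}
        for ts, key, value in self.entries:
            if ts > point_in_time:
                break                     # ignore anything written after the recovery point
            state[key] = value
        return state

journal = CdpJournal()
journal.record_write(100.0, "model-weights", b"v1")
journal.record_write(200.0, "model-weights", b"v2-corrupted")
clean_copy = journal.restore_to(150.0)    # recovers the last good version: {'model-weights': b'v1'}
```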


Preparing for massive AI growth

Contending with massive volumes of data is inescapable in the AI world. Whether AI is used for entertainment or for improving lives and the way we work, its transformative power cannot be overstated. It will support global infrastructures and economies, underpinning sectors such as healthcare, finance, communications, logistics and energy. To protect these critical applications, continuous data protection will be imperative.


However, with legacy technologies unable to provide the data mobility and security needed for huge data sets, organisations will need to adopt modern near-synchronous replication alternatives to accommodate the inexorable expansion of AI that’s already underway.

Christopher Rogers is Senior Technology Evangelist at Zerto, a Hewlett Packard Enterprise company


Main image courtesy of iStockPhoto.com
