This is a guest post by Satoru Ishikawa, Solutions Architect at Classmethod, in partnership with AWS.
In April 2025, AWS announced the deprecation of Amazon Redshift DC2 instances, guiding customers to migrate to either Redshift RA3 instances or Redshift Serverless. Redshift RA3 instances and Serverless adopt a design that separates storage and compute, and offer new features such as data sharing, concurrency scaling for writes, zero-ETL integration, and cluster relocation.
In this post, we share insights from one of our customers' migration from DC2 to RA3 instances. The customer, a large enterprise in the retail industry, operated a 16-node dc2.8xlarge cluster for business intelligence (BI) and ETL workloads. Facing growing data volumes and disk capacity limitations, they successfully migrated to RA3 instances using a Blue/Green deployment approach, achieving improved ETL query performance and expanded storage capacity while maintaining cost efficiency.
Amazon Redshift architecture types
Amazon Redshift offers two deployment options: Provisioned mode, where you choose the instance type and number of nodes and manage resizing as needed, and Redshift Serverless, which automatically provisions data warehouse capacity and intelligently scales the underlying resources. The following diagram compares these two architecture types.

Provisioned clusters require you to determine cluster size upfront, but you can optimize costs by purchasing Reserved Instances (RIs) or scheduling pause and resume actions. Serverless automatically provisions resources as needed, with a pay-per-use model where you only pay for the compute resources consumed. Both services support migration between each other and offer the same features, including SQL, zero-ETL, and Federated Query capabilities. For specific pricing details, see Amazon Redshift pricing.
Provisioned clusters are suitable for large-scale, predictable workloads and offer automatic scaling based on queuing. Serverless provides management-free automatic scaling for variable workloads, with AI-driven optimization that scales based on workload complexity and data volumes. For more details, refer to Comparing Amazon Redshift Serverless to an Amazon Redshift provisioned data warehouse.
Customer case study: Migration from DC2 instances
This section describes the customer's migration from Amazon Redshift DC2 to RA3 instance types. The migration used a Blue/Green deployment approach that minimized downtime while achieving both cost optimization and performance improvement.
The customer's workload had the following characteristics:
Use cases
The customer had the following key use cases for their Amazon Redshift deployment:
- Queries through a BI tool during business hours
- High volume of read queries
- Peak access on Mondays and at the beginning of each month
- Data processing in the early morning
- Concentrated write queries for data loading and transformation
- Steady-state workload characteristics
- Queries running more than 16 hours daily
Requirements
The customer had the following key requirements for their Amazon Redshift migration:
- Performance
- Use auto scaling (such as concurrency scaling) during peak access periods
- Data size
- Disk capacity expansion needed
- Cost management
- Easy budget prediction and management
- Utilize discount offerings for long-term usage
- Compatibility
- Maintain compatibility with existing applications and BI tools
- Avoid endpoint changes
- Availability
- Maximum downtime of 8 hours acceptable during migration
- Network
- Don't modify the existing 2-Availability Zone (AZ) subnet configuration
- When to migrate
- To be performed during low-load days and hours
- Planned downtime possible within 8 hours
Key considerations in system design, implementation, and operation included extended operation hours, ease of budget prediction and management, cost optimization through Reserved Instances (RIs), and maintaining compatibility with existing systems (avoiding endpoint changes). The customer evaluated Amazon Redshift Serverless, which offered attractive features such as a pay-per-use model, automatic scaling capabilities, and the potential for better price-performance for variable workloads. While both Redshift Serverless and provisioned clusters could effectively support their workload patterns, the customer chose the provisioned model with RA3 nodes, leveraging their years of operational experience with provisioned environments, their existing RI strategy, and their established capacity planning approach.
Features of the RA3 instance type
Built on the AWS Nitro System, RA3 instances with managed storage adopt an architecture that separates compute and storage, allowing independent scaling and separate billing for each component. These instances use high-performance SSDs for hot data and Amazon S3 for cold data, providing ease of use, cost-effective storage, and fast query performance. For more details, refer to Amazon Redshift RA3 instances with managed storage.
Migration prerequisites
The customer had the following migration prerequisites in place:
- The customer used a Redshift cluster with 16 nodes of dc2.8xlarge.
- The customer chose a Blue/Green deployment approach for the migration, where they would restore from a snapshot to the RA3 instance type, enabling quick rollback if necessary.
- The customer implemented cluster switching and rollback through endpoint switching using cluster identifier rotation.
- Additionally, to improve performance under high concurrency, they transitioned the transaction isolation level from SERIALIZABLE ISOLATION to SNAPSHOT ISOLATION.
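The isolation level change above is a single DDL statement in Amazon Redshift. A minimal sketch, assuming a `psql` client; the host, database, and user names here are placeholders for illustration:

```shell
# Switch the database from SERIALIZABLE to SNAPSHOT isolation.
# Host, database, and credentials below are placeholders.
# Note: Redshift requires that no other users are connected to the
# target database while its isolation level is being changed.
PGPASSWORD="$REDSHIFT_PASSWORD" psql \
  -h example-cluster.abc123.ap-northeast-1.redshift.amazonaws.com \
  -p 5439 -U awsuser -d analytics \
  -c "ALTER DATABASE analytics ISOLATION LEVEL SNAPSHOT;"
```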
Cluster migration methods
There were two migration options available: Elastic Resize and Classic Resize.
Amazon Redshift's Classic Resize functionality had been enhanced for resizing to RA3 instance types, significantly reducing the write-unavailable period. Based on PoC testing, after initiating the resize, the cluster's status was Modifying for 16 minutes before it became available. Based on these results, the customer proceeded with the Classic Resize approach.
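The status transition observed in the PoC can be watched with the AWS CLI; a sketch, with an illustrative cluster identifier:

```shell
# Poll the cluster state once; during a Classic Resize it reports
# "modifying" before returning to "available".
# The cluster identifier is a placeholder.
aws redshift describe-clusters \
  --cluster-identifier example-ra3-cluster \
  --query 'Clusters[0].ClusterStatus' \
  --output text
```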
Cluster sizing
Sizing involved determining the instance type and number of nodes for the migration target. Sizing considerations included workload characteristics such as CPU-intensive (queries using high CPU), I/O-intensive (queries with high data read/write), or both. When migrating from DC2 instance types, additional nodes might be required depending on workload requirements. Nodes were added or removed based on the compute requirements for the necessary query performance.
Comparing configurations with similar cluster costs in terms of instance size and count, for a dc2.8xlarge 16-node cluster, the recommended configuration was 8 nodes of ra3.16xlarge. The following was the cost comparison in the Tokyo Region:
- Recommended: dc2.8xlarge 16-node cluster => ra3.16xlarge 8-node cluster
- $97.52/h ($6.095/h * 16 nodes) => $122.776/h ($15.347/h * 8 nodes)
- Cost-focused: dc2.8xlarge 16-node cluster => ra3.16xlarge 6-node cluster
- $97.52/h ($6.095/h * 16 nodes) => $92.082/h ($15.347/h * 6 nodes)
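The hourly figures above follow directly from the quoted per-node on-demand rates; a quick check of the arithmetic:

```shell
# Verify the hourly cost comparison from the per-node rates above.
awk 'BEGIN {
  printf "dc2.8xlarge x 16: $%.2f/h\n", 6.095 * 16    # current cluster
  printf "ra3.16xlarge x 8: $%.3f/h\n", 15.347 * 8    # recommended
  printf "ra3.16xlarge x 6: $%.3f/h\n", 15.347 * 6    # cost-focused
}'
```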
For this migration, the customer proceeded with a cost-efficient 6-node ra3.16xlarge cluster to stay within existing budget constraints. However, since this node count could face throughput limitations at certain times, they enabled concurrency scaling for the RA3 instance type to handle access spikes.
Concurrency scaling provides up to 1 hour of free credits per day for each active cluster, accumulating up to 30 hours. On-demand usage rates apply when exceeding this free tier. While the customer chose to implement concurrency scaling, using Elastic Resize to temporarily add nodes during peak loads was also considered, but was rejected due to the on-demand costs for additional nodes and the brief disconnection period during switching.
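On a provisioned cluster, concurrency scaling is enabled per WLM queue (by setting `concurrency_scaling` to `auto` in the queue's `wlm_json_configuration`) and capped cluster-wide by the `max_concurrency_scaling_clusters` parameter. A sketch of setting the cap, with a placeholder parameter group name:

```shell
# Cap concurrency scaling at 2 clusters for the cluster's parameter group.
# The parameter group name is a placeholder; the cap applies once the
# parameter group is associated with the cluster.
aws redshift modify-cluster-parameter-group \
  --parameter-group-name example-ra3-params \
  --parameters "ParameterName=max_concurrency_scaling_clusters,ParameterValue=2,ApplyType=dynamic"
```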
Managed storage cost
RA3 instances use Redshift Managed Storage (RMS), which is charged at a fixed GB-month rate. The customer's roughly 2 TB of data required including storage costs in the estimates. For pricing details, see Amazon Redshift pricing.
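As a rough illustration of how the RMS line item scales with data size (the per-GB rate below is a hypothetical placeholder, not a quoted price; see the pricing page for current rates):

```shell
# Illustrative only: estimate monthly RMS cost for ~2 TB of data.
# The $/GB-month rate is a hypothetical placeholder.
awk 'BEGIN {
  gb = 2 * 1024          # ~2 TB of data
  rate = 0.0261          # hypothetical $/GB-month rate
  printf "~$%.2f/month\n", gb * rate
}'
```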
Migration steps from DC2 to RA3
After creating an RA3 cluster from the DC2 cluster's snapshot, the customer swapped the cluster identifiers. The following diagram shows this process.

- Take a snapshot of the current DC2 cluster.
- Restore an RA3 cluster from the snapshot with a different cluster identifier (Classic Resize).
- Swap the cluster identifiers between the current DC2 cluster and the new RA3 cluster.
If any issues arise after the cluster switch, you can quickly roll back by returning the original DC2 cluster to its original cluster identifier.
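One way to script the identifier swap and the rollback path with the AWS CLI; all identifiers here are placeholders, and since each rename briefly changes the endpoint, the swap belongs inside the planned maintenance window:

```shell
# Free up the production identifier, then assign it to the RA3 cluster.
# All identifiers below are placeholders.
aws redshift modify-cluster \
  --cluster-identifier prod-dwh \
  --new-cluster-identifier prod-dwh-dc2-old
aws redshift wait cluster-available --cluster-identifier prod-dwh-dc2-old

aws redshift modify-cluster \
  --cluster-identifier prod-dwh-ra3-new \
  --new-cluster-identifier prod-dwh

# Rollback path: reverse the two renames so the production identifier
# (and therefore the endpoint) points back at the DC2 cluster.
```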
Note: Restore from a snapshot
Running the restore operation using CLI commands is recommended to minimize operational errors and ensure reproducibility.
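A sample restore invocation, with placeholder snapshot identifier, cluster identifier, and subnet group name (the actual names are not given in this post); `--node-type` and `--number-of-nodes` direct the restore at the RA3 target configuration:

```shell
# Restore the DC2 snapshot into a new 6-node ra3.16xlarge cluster.
# Identifiers and subnet group below are placeholders.
aws redshift restore-from-cluster-snapshot \
  --cluster-identifier prod-dwh-ra3-new \
  --snapshot-identifier prod-dwh-dc2-snapshot \
  --node-type ra3.16xlarge \
  --number-of-nodes 6 \
  --cluster-subnet-group-name prod-dwh-subnet-group

# Block until the restored cluster is ready.
aws redshift wait cluster-restored --cluster-identifier prod-dwh-ra3-new
```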
Production migration duration
The time required for the restore and Classic Resize steps can vary significantly depending on data volume and target cluster specifications. The customer conducted a rehearsal beforehand to measure the actual time required.
Test results
Before the production migration, the customer created a test cluster by restoring a snapshot to the RA3 instance type. While Redshift Test Drive is generally useful for workload testing, this customer faced unique constraints: enabling audit logging on their production cluster would require configuration changes, cluster restarts, and complex approval processes under their strict change management policies. To address this, they developed a custom load testing tool that captured workload patterns using Amazon Redshift system views (SYS_QUERY_HISTORY and SYS_QUERY_TEXT), which retain 7 days of query history. The tool replayed 55,755 historical queries with 50-way parallelism against both the DC2 and RA3 clusters, comparing metrics including query execution time, CPU utilization, and disk I/O. Query result caching was disabled during testing to ensure accurate comparisons.
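A simplified sketch of the capture side of such a tool, using the Redshift Data API to pull recent query texts from the system views named above; the cluster, database, and user names are placeholders:

```shell
# Export up to 7 days of completed SELECT query texts for later replay.
# SYS_QUERY_HISTORY retains roughly 7 days of history; query_text is
# truncated at 4,000 characters, so longer statements would need
# SYS_QUERY_TEXT. All connection details below are placeholders.
aws redshift-data execute-statement \
  --cluster-identifier prod-dwh \
  --database analytics \
  --db-user awsuser \
  --sql "SELECT query_id, query_text
         FROM sys_query_history
         WHERE status = 'success'
           AND query_type = 'SELECT'
           AND start_time >= dateadd(day, -7, getdate())"
```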
BI query performance
BI queries were tested using the custom load testing tool. The results represent the average execution time across 15 test runs of 55,755 queries executed with 50-way parallelism. Without concurrency scaling, the dc2.8xlarge 16-node cluster averaged 45.82 seconds per query, while the ra3.16xlarge 6-node cluster averaged 91.30 seconds. This indicated that RA3 instances showed longer execution times for short and medium queries in a direct migration without optimizations. However, enabling concurrency scaling improved RA3 performance progressively. With concurrency scaling enabled at a maximum of 2 clusters, the ra3.16xlarge 6-node cluster achieved an average of 72.48 seconds per query, a 21% improvement over the non-scaled configuration.
| Node Type / Number of nodes | Average Query Time |
| dc2.8xlarge 16-node cluster (no concurrency scaling) | 45.82 seconds |
| ra3.16xlarge 6-node cluster (no concurrency scaling) | 91.30 seconds |
| ra3.16xlarge 6-node cluster (concurrency scaling, max 2 clusters) | 72.48 seconds |
ETL query performance comparison
For long-running ETL queries (execution time greater than 10 minutes), the RA3 cluster demonstrated better performance than DC2. These results represented a direct migration of the customer's workload with no optimizations applied.
- For large-scale data load workload 1, the ra3.16xlarge cluster completed the query 28% faster than the dc2.8xlarge cluster (41 minutes vs. 57 minutes).
- For complex transformation workload 1, the ra3.16xlarge cluster was 23% faster (1 hour 1 minute vs. 1 hour 20 minutes).
These results indicated that the RA3 node type was more performant for time-intensive data loading and transformation tasks. The higher CPU utilization values for RA3 suggested more effective use of compute resources.
Large-scale data load workload 1:
| Node Type / Number of nodes | Average Query Time | MAXCPU% |
| ra3.16xlarge 6-node cluster | 41 minutes 09 seconds | 11.45 |
| dc2.8xlarge 16-node cluster | 57 minutes 07 seconds | 10.85 |
Complex transformation workload 1:
| Node Type / Number of nodes | Average Query Time | MAXCPU% |
| ra3.16xlarge 6-node cluster | 1 hour 01 minute 33 seconds | 74.23 |
| dc2.8xlarge 16-node cluster | 1 hour 20 minutes 36 seconds | 53.58 |
Performance tuning
Based on the test results, the customer identified that RA3 showed longer execution times for short and medium BI queries but faster performance for long-running ETL queries compared to DC2. To optimize overall performance, they focused on identifying slow queries and frequently referenced tables, prioritizing the optimizations with the highest impact.
Performance tuning strategy
The customer considered several optimization strategies to leverage RA3's architectural advantages. One key strategy involved pre-processing ad hoc short and medium query workloads during low-load periods, creating pre-processed tables or materialized views for queries that repeatedly performed joins, aggregations, filters, and projections. RA3's separated compute and storage architecture, with cost-effective large-scale storage, supported this approach.
Converting regular views to materialized views
Analysis of slow queries revealed the use of joins in views, and frequently referenced tables were being accessed multiple times through these views. As a countermeasure, the customer replaced frequently used regular views with materialized views, removing unnecessary data ranges and redundant columns.
Amazon Redshift supports incremental updates of materialized view contents through the REFRESH MATERIALIZED VIEW command, enabling efficient data updates.
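A sketch of the pattern, run via `psql`; the table, column, and view names are hypothetical, since the customer's actual schema is not shown in this post:

```shell
# Replace a frequently joined regular view with a materialized view,
# then refresh it. All object names below are placeholders, and
# $REDSHIFT_CONN is assumed to hold a valid connection string.
psql "$REDSHIFT_CONN" <<'SQL'
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT s.sale_date, s.store_id, SUM(s.amount) AS total_amount
FROM sales s
JOIN stores st ON st.store_id = s.store_id
GROUP BY s.sale_date, s.store_id;

-- Incremental refresh where the view is eligible, as described above.
REFRESH MATERIALIZED VIEW mv_daily_sales;
SQL
```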
Materialized views and query rewrite
By converting regular views to materialized views, existing queries may be automatically optimized through the query rewrite feature provided by the query planner. For more details, refer to Automatic query rewriting to use materialized views.
Automatic tuning with AutoMV
On the DC2 cluster, disk utilization consistently exceeded 80%, which disabled the AutoMV feature due to insufficient disk space. With RA3's expanded storage, automatic tuning through AutoMV became possible, leading to further performance improvements. For more details about AutoMV, refer to Automated materialized views.
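AutoMV is governed by the `auto_mv` parameter in the cluster's parameter group (enabled by default); a sketch for confirming it is on, with a placeholder parameter group name:

```shell
# Check whether AutoMV (auto_mv) is enabled in the cluster's
# parameter group. The group name is a placeholder.
aws redshift describe-cluster-parameters \
  --parameter-group-name example-ra3-params \
  --query "Parameters[?ParameterName=='auto_mv'].[ParameterName,ParameterValue]" \
  --output text
```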
Performance tuning results
After applying these optimizations, the customer achieved the following results:
- Maintained existing performance while controlling cost increases
- Achieved higher CPU utilization while maintaining throughput
- Enhanced dynamic throughput during peak load periods using concurrency scaling's automatic scaling
Conclusion
In this post, you learned how a large retail enterprise successfully migrated from Amazon Redshift DC2 to RA3 instances. The Blue/Green deployment approach enabled a safe migration with quick rollback capability, while the separated compute and storage architecture of RA3 provided the flexibility to handle growing data volumes. Although RA3 showed different performance characteristics for short BI queries compared to DC2, the customer achieved significant improvements in long-running ETL query performance (up to 28% faster for data loads and 23% faster for complex transformations). By leveraging RA3-specific features such as materialized views and AutoMV, they optimized overall query performance while maintaining cost efficiency through Reserved Instances and concurrency scaling.
To continue your RA3 migration journey, see Best practices for upgrading from Amazon Redshift DC2 to RA3 and Amazon Redshift Serverless, and Resize Amazon Redshift from DC2 to RA3 with minimal or no downtime, for additional guidance and best practices.
About the authors