Amazon Redshift Unleashes Graviton-Powered RG Instances: Up to 2.2x Faster, 30% Cheaper Per vCPU
Breaking News — Amazon Web Services today launched a new instance family for Amazon Redshift—the RG instances—powered by its custom AWS Graviton processors. These instances promise data warehouse workloads running up to 2.2 times faster than the previous RA3 generation, while slashing the price per vCPU by 30%.
The integrated data lake query engine lets organizations run SQL analytics across both Redshift data warehouses and Amazon S3 data lakes from a single engine, accelerating Apache Iceberg queries up to 2.4x and Apache Parquet queries up to 1.5x relative to RA3 instances.
“This blend of speed, cost efficiency, and an integrated data lake query engine makes Redshift RG instances well-suited to handle the high query volumes and low-latency requirements of today’s analytics and agentic AI workloads,” said Rahul Pathak, Vice President of Amazon Redshift at AWS, in an exclusive interview.
Background
Amazon Redshift has powered cloud data warehouses since 2013, evolving from dense compute to RA3 instances and serverless options. Over the past decade, organizations have increasingly used both structured warehouse tables and cost-effective data lakes to manage growing data volumes.

The rise of AI agents—programs that query data warehouses at scales far exceeding human usage—has driven operational costs higher. In March 2026, AWS had already improved Redshift performance for BI dashboards and ETL by up to 7x on new queries, targeting low-latency SQL needs.
Today's RG instances represent the next architectural leap. They use AWS Graviton processors, ARM-based chips designed for high efficiency, to deliver a step-change in price-performance for analytics workloads.

What This Means
RG instances directly address the cost and complexity of combined data warehouse and data lake environments. By running both workload types from a single query engine, customers can simplify operations and reduce total analytics costs.
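To make the single-engine idea concrete, here is what one such query could look like when submitted through the Redshift Data API. This is an illustrative sketch only: the cluster identifier, database, and schema/table names are placeholders, and joining against Iceberg data in S3 assumes an external schema has already been configured.

```shell
# Hypothetical example: one SQL statement joining a local warehouse table
# with an Apache Iceberg table in S3 (exposed via an external schema).
# Cluster name, database, and schema/table names are placeholders.
aws redshift-data execute-statement \
  --cluster-identifier my-rg-cluster \
  --database analytics \
  --sql "SELECT o.order_id, c.segment
         FROM sales.orders o
         JOIN datalake_schema.customers_iceberg c
           ON o.customer_id = c.customer_id
         LIMIT 10;"
```

Because both sides of the join run in the same engine, no separate federation layer or data copy is needed.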
The new instance family is especially valuable for organizations deploying autonomous AI agents that need near-real-time responses. Faster query execution at lower cost per vCPU means that high-volume agentic workloads become more economical to run.
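A quick back-of-the-envelope calculation shows why those two headline numbers compound. The sketch below assumes the 2.2x speedup and 30% per-vCPU discount apply uniformly to a workload, which real workloads will not do exactly; it is not a substitute for the AWS Pricing Calculator.

```shell
# Back-of-the-envelope price-performance sketch using the headline figures.
# Assumes they apply uniformly, which is an idealization.
speedup=2.2
price_discount=0.30

# Relative cost to finish the same amount of query work on RG vs RA3:
# cheaper per vCPU-hour, and fewer hours needed at 2.2x throughput.
relative_cost=$(awk -v s="$speedup" -v d="$price_discount" \
  'BEGIN { printf "%.3f", (1 - d) / s }')

echo "RG cost for equivalent work: ${relative_cost}x of RA3"
```

Under those idealized assumptions, the same query volume would cost roughly a third of what it does on RA3, which is the economics that makes always-on agentic workloads viable.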
Below is a comparison of recommended RG instances against current RA3 instances:
- ra3.xlplus → rg.xlarge: 4 vCPUs, 32 GB memory (small departmental analytics clusters)
- ra3.4xlarge → rg.4xlarge: 16 vCPUs and 128 GB memory, a 1.33:1 increase over ra3.4xlarge's 12 vCPUs and 96 GB (standard production workloads)
Getting started is straightforward. You can launch new clusters or migrate existing ones through the AWS Management Console, AWS CLI, or AWS API. The integrated data lake query engine is enabled by default, so no additional configuration is needed.
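As an illustration, migrating an existing cluster from the CLI could look like the following. The cluster identifier and node count are placeholders, and the exact resize paths available for RG node types are an assumption here; check the current Redshift documentation before running anything like this against production.

```shell
# Hypothetical migration sketch: resize an existing cluster to RG nodes.
# "analytics-prod" and the node count are placeholders; verify which
# resize paths (elastic vs. classic) RG node types support first.
aws redshift modify-cluster \
  --cluster-identifier analytics-prod \
  --node-type rg.4xlarge \
  --number-of-nodes 2

# Poll the cluster status until it returns to "available".
aws redshift describe-clusters \
  --cluster-identifier analytics-prod \
  --query "Clusters[0].ClusterStatus"
```

The same change can be made from the console's cluster configuration page for teams that prefer not to script it.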
For estimated savings, AWS recommends using the AWS Pricing Calculator with your specific workload patterns.
Related Articles
- How to Capitalize on AI-Driven Cloud Growth: A Step-by-Step Guide from Big Tech Earnings
- How to Tailor Cloud Provider Observability Views for AWS, Azure, and GCP in Grafana Cloud
- 5 Critical Steps to Deploy ClickHouse Securely with Docker Hardened Images
- Scaling Sovereign Clouds: Azure Local Expands to Thousands of Nodes
- Kubernetes v1.36 Enhances Memory QoS with Tiered Protection and Opt-In Reservations
- Cloudflare Restructures Workforce for an AI-Driven Future
- How Server-Side Sharding Reduces API Server Load in Kubernetes v1.36