5/12/2026, 12:00:00 AM ~ 5/13/2026, 12:00:00 AM (UTC)

Recent Announcements

AWS Lambda supports scheduled scaling for functions on Lambda Managed Instances

AWS Lambda now supports scheduled scaling for functions running on Lambda Managed Instances, using Amazon EventBridge Scheduler. This capability allows you to define one-time or recurring schedules that proactively adjust your function’s capacity limits ahead of expected traffic, to meet your performance targets during peak periods and avoid costs during idle periods.

Lambda Managed Instances lets you run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and autoscaling. Capacity scales between your configured minimum and maximum execution environment limits based on traffic. Previously, customers with predictable traffic patterns, such as business-hours applications or marketing events, had to manually adjust capacity limits ahead of known demand changes or build custom automation to manage scaling on a schedule. With scheduled scaling, you can now define schedules that adjust your function’s capacity limits before expected traffic arrives. For example, you can schedule capacity limits to increase before business hours so execution environments are ready when the first requests arrive. You can also define a schedule that scales capacity to zero during idle periods, so you only pay when the function is actively serving traffic, and schedule it to scale back up before traffic returns.

Scheduled scaling for functions running on Lambda Managed Instances is available in all AWS Regions where Lambda Managed Instances is supported. You can create schedules using the Amazon EventBridge Scheduler console, AWS CLI, AWS SDK, AWS CDK, or AWS CloudFormation. To learn more, visit the AWS Lambda Managed Instances documentation, Amazon EventBridge Scheduler documentation, AWS Lambda pricing, and Amazon EventBridge pricing.
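As a rough sketch of what such a schedule could look like, the payload below follows the shape of EventBridge Scheduler's CreateSchedule request and targets Lambda through Scheduler's aws-sdk universal-target ARN format. The ARN format itself is real, but the specific Lambda API action name (`putFunctionScalingConfig`) and its capacity parameters are assumptions for illustration, since the announcement does not name the exact API.

```python
import json

# Sketch: an EventBridge Scheduler schedule that raises a function's capacity
# limits before business hours. CreateSchedule's top-level fields are the real
# request shape; the Lambda action name and the keys inside "Input" are
# assumptions, not a documented API.
def business_hours_scale_up(function_name: str, role_arn: str,
                            min_env: int, max_env: int) -> dict:
    return {
        "Name": f"{function_name}-business-hours-scale-up",
        "ScheduleExpression": "cron(30 7 ? * MON-FRI *)",  # 07:30, weekdays
        "ScheduleExpressionTimezone": "UTC",
        "FlexibleTimeWindow": {"Mode": "OFF"},  # fire at the exact time
        "Target": {
            # Hypothetical action name, for illustration only.
            "Arn": "arn:aws:scheduler:::aws-sdk:lambda:putFunctionScalingConfig",
            "RoleArn": role_arn,
            "Input": json.dumps({
                "FunctionName": function_name,
                "MinimumExecutionEnvironments": min_env,
                "MaximumExecutionEnvironments": max_env,
            }),
        },
    }

schedule = business_hours_scale_up(
    "orders-api", "arn:aws:iam::123456789012:role/SchedulerRole", 10, 100)
print(schedule["Target"]["Arn"])
```

A second schedule with `min_env=0` outside business hours would implement the scale-to-zero pattern the announcement describes.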

Amazon EventBridge Scheduler adds 619 new SDK API actions, including Lambda Managed Instances

Amazon EventBridge Scheduler expands its AWS SDK integrations with 13 additional services and 619 new API actions across new and existing AWS services, including AWS Lambda Managed Instances. You can now schedule direct invocations of a broader set of AWS services without writing custom integration code.

EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage billions of scheduled events and tasks across more than 270 AWS services, without provisioning or managing the underlying infrastructure. With this expansion, you can now schedule a broader set of AWS API actions directly from Scheduler, including scaling Lambda Managed Instances up or down on a time-based schedule for precise control over capacity provisioning. These enhancements are now generally available in all AWS Regions where Amazon EventBridge Scheduler is available. Specific services and API actions are subject to the availability of the target service in the AWS Region. To learn more about Amazon EventBridge Scheduler SDK integrations, visit the Developer Guide.
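These SDK integrations are addressed through Scheduler's "universal target" ARN convention, `arn:aws:scheduler:::aws-sdk:<service>:<apiAction>`, where the action is written in camelCase with a lowercase first letter. A small helper makes the convention concrete; note that any service/action pair you pass must actually be among the actions Scheduler supports.

```python
# Sketch: format an EventBridge Scheduler universal-target ARN. The ARN
# convention is documented; this helper only formats it and does not validate
# that the service/action pair is a supported target.
def universal_target_arn(service: str, api_action: str) -> str:
    # Scheduler expects camelCase with a lowercase first letter,
    # e.g. "CreateSnapshot" becomes "createSnapshot".
    action = api_action[0].lower() + api_action[1:]
    return f"arn:aws:scheduler:::aws-sdk:{service}:{action}"

print(universal_target_arn("ec2", "CreateSnapshot"))
# → arn:aws:scheduler:::aws-sdk:ec2:createSnapshot
```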

Amazon SageMaker Feature Store now supports SageMaker Python SDK v3

Amazon SageMaker Feature Store now supports the SageMaker Python SDK v3, including new capabilities for Lake Formation access controls and Apache Iceberg table property configuration. Feature Store is a fully managed repository to store, share, and manage features for machine learning models. Data scientists can now use the modern, modular SDK v3 interfaces to manage feature groups with fine-grained access control and optimized offline storage.

The SDK v3 streamlines feature group workflows and reduces boilerplate. With Lake Formation integration, data scientists can enforce column-level and row-level access control on offline store data through an opt-in setting at feature group creation. With Iceberg properties support, they can configure table properties such as compaction and snapshot expiration directly through the SDK to optimize storage and query performance. Together, these capabilities let data scientists govern access to feature data and tune offline store performance from a single SDK, without managing separate tools.

These capabilities are available in all AWS Regions where Amazon SageMaker Feature Store is available. To get started, install SageMaker Python SDK v3.8.0 or later. For more information, see the Lake Formation access controls and Iceberg metadata management documentation.
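To make the Iceberg side concrete, the sketch below assembles a feature-group configuration with standard Apache Iceberg table properties for compaction targets and snapshot expiration. The Iceberg property keys are standard table properties; the surrounding parameter names (`table_format`, `lake_formation_enabled`, and so on) are assumptions for illustration, not the SDK v3's exact signature, so check the Feature Store documentation for the real interface.

```python
# Sketch of feature-group creation settings. Iceberg property keys below are
# standard Apache Iceberg table properties; every other key name here is an
# assumed placeholder, not the SDK v3 API.
iceberg_properties = {
    # Target file size that compaction rewrites small files toward (512 MiB).
    "write.target-file-size-bytes": str(512 * 1024 * 1024),
    # Expire snapshots older than 7 days to bound metadata growth.
    "history.expire.max-snapshot-age-ms": str(7 * 24 * 60 * 60 * 1000),
}

feature_group_config = {
    "feature_group_name": "customer-features",
    "record_identifier_name": "customer_id",
    "event_time_feature_name": "event_time",
    "table_format": "Iceberg",          # assumption: Iceberg offline store
    "lake_formation_enabled": True,     # assumption: the opt-in at creation
    "table_properties": iceberg_properties,
}
print(feature_group_config["table_properties"]["write.target-file-size-bytes"])
```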

Karpenter now supports Amazon Application Recovery Controller zonal shift

Amazon Elastic Kubernetes Service (Amazon EKS) now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift when using the open source Karpenter project for compute provisioning. ARC helps you manage and coordinate recovery for your applications across AWS Regions and Availability Zones (AZs). With this launch, you can better maintain Kubernetes application availability by automating the process of shifting in-cluster network traffic away from an impaired AZ.

Customers increasingly deploy highly available applications in Amazon EKS across multiple AZs to eliminate a single point of failure. With ARC zonal shift, you can temporarily mitigate an AZ impairment by redirecting in-cluster network traffic away from the impacted AZ. For a fully automated experience, authorize AWS to manage this on your behalf using ARC zonal autoshift, which includes practice runs to verify your cluster functions as expected with one less AZ.

When a zonal shift is activated for your EKS cluster, Karpenter stops provisioning new capacity in the impaired AZ, halts voluntary disruptions such as consolidation and drift for nodes in that AZ, and prevents voluntary disruptions in healthy zones if they depend on scheduling pods to the impaired zone. Pods with strict scheduling requirements, such as volume affinities that require the impaired zone, will not trigger launch attempts. When the zonal shift expires or is canceled, Karpenter resumes normal operations.

This Karpenter feature works with both manual zonal shifts and zonal autoshifts. No custom ARC resources are required, as Karpenter integrates directly with the existing EKS cluster ARC resource. To enable zonal shift support, set the ENABLE_ZONAL_SHIFT setting in your Karpenter settings. To learn more, visit the Karpenter documentation and the ARC zonal shift documentation.
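Karpenter settings are surfaced as environment variables on the controller, so one hedged way to picture the change is a strategic-merge patch that adds the setting. The `ENABLE_ZONAL_SHIFT` name comes from the announcement; the container name (`controller`) is the Helm chart's usual default and may differ in your install, where setting a Helm value is the more common route.

```python
import json

# Sketch: a strategic-merge patch that sets ENABLE_ZONAL_SHIFT=true on the
# Karpenter controller container. The setting name is from the announcement;
# the container name is an assumed chart default.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "controller",
                    "env": [{"name": "ENABLE_ZONAL_SHIFT", "value": "true"}],
                }]
            }
        }
    }
}
print(json.dumps(patch, indent=2))
```

Applied, for example, with `kubectl patch deployment karpenter -n kube-system --patch-file patch.json` (deployment name and namespace depend on your installation).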

Amazon Redshift launches RG instances powered by AWS Graviton

Amazon Redshift announces the general availability of RG instances, a new generation of provisioned cluster nodes powered by AWS Graviton processors. RG instances run data warehouse and data lake workloads up to 2.4x as fast as previous-generation RA3 instances, at a 30% lower price per vCPU. RG instances include Redshift’s custom-built vectorized data lake query engine, which processes Apache Iceberg and Parquet data on your cluster nodes, enabling you to run SQL analytics across your data warehouse and data lake using a single engine. This eliminates the need for Redshift Spectrum’s separate scanning fleet and its associated per-terabyte charges.

Whether you’re running structured data warehouse workloads on Redshift Managed Storage or querying open-format data lake tables in Amazon S3, RG instances deliver significant performance improvements: up to 2.2x as fast as RA3 instances for data warehouse workloads, up to 2.4x as fast for Apache Iceberg queries, and up to 1.5x as fast for Parquet workloads. The natively built data lake engine features a purpose-built I/O subsystem with smart prefetch, NVMe caching, vectorized Parquet scans, and advanced file- and partition-level pruning. Just-in-Time (JIT) Analyze delivers consistently fast queries without manual tuning by automatically collecting and updating table statistics as your data and workload patterns evolve. Intelligent NVMe caching keeps frequently accessed datasets close to compute, reducing round trips to your data lake for faster response times on repeated queries.

RG instances are available at launch in two instance sizes: rg.xlarge and rg.4xlarge. Existing RA3 clusters can migrate using Snapshot & Restore, Elastic Resize, or Classic Resize. RG instances are available with flexible pricing options, including On-Demand and 1-year and 3-year Reserved Instances with No Upfront payment. For pricing details, visit the Amazon Redshift pricing page.

Amazon Redshift RG instances are now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Malaysia), Asia Pacific (Hyderabad), Asia Pacific (Taiwan), and Asia Pacific (Melbourne).

To get started, refer to the following resources:

Amazon Redshift RG Instance Documentation

RA3 to RG Upgrade Guide

Amazon Redshift Pricing
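For the resize-based migration paths above, a minimal sketch of the Redshift ResizeCluster request looks like the following. ResizeCluster and these parameter names are the existing API shape; the identifier, node type, and node count are placeholders, and the appropriate RG node count for your RA3 cluster should come from the upgrade guide before you resize.

```python
# Sketch: parameters for Redshift's ResizeCluster API to move an existing
# cluster onto RG nodes via Elastic Resize. Parameter names match the API;
# the values are placeholders for illustration.
resize_params = {
    "ClusterIdentifier": "analytics-prod",  # placeholder cluster name
    "NodeType": "rg.4xlarge",               # target RG node type
    "NumberOfNodes": 4,                     # placeholder; size per upgrade guide
    "Classic": False,                       # False = Elastic Resize, True = Classic
}

# With boto3 this would be invoked as:
#   boto3.client("redshift").resize_cluster(**resize_params)
print(resize_params["NodeType"])
```

Snapshot & Restore is the alternative path when you want to leave the source cluster untouched during validation.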

Amazon CloudFront Premium flat-rate plan now supports configurable usage allowances

Previously, the Amazon CloudFront Premium flat-rate plan supported a single usage allowance, and customers who outgrew it needed to contact us to discuss custom pricing options. Now, the Premium plan offers self-service monthly usage levels ranging from 500 million to 6 billion requests and 50 TB to 600 TB, so customers can scale within the plan as their applications grow. Enterprises and mid-sized businesses whose baseline traffic previously made them ineligible for flat-rate plans can now adopt the Premium plan at a usage level that fits their application.

You select your Premium plan usage level in the CloudFront console, see your new monthly flat-rate price instantly, and can change your usage level at any time with no commitment required. All Premium plan features are included at every usage level. Flat-rate plans provide a single monthly price covering content delivery, AWS WAF and DDoS protection, bot management, Amazon Route 53 DNS, Amazon CloudWatch Logs ingestion, serverless edge compute, and Amazon S3 storage credits, with no overage charges.

To get started, visit the CloudFront console. To learn more, refer to the Launch Blog or Amazon CloudFront Developer Guide.

Amazon Connect now supports embedding Cases and Customer Profiles in custom agent applications

Amazon Connect now enables you to embed Cases and Customer Profiles into custom agent applications, helping agents access case details and customer context alongside the tools they already use to resolve issues. Developers can use the Amazon Connect SDK to bring native Connect experiences into custom applications, reducing the need to build and maintain these capabilities from scratch.

The Amazon Connect SDK is available in all AWS Regions where Amazon Connect is available. To learn more and get started, visit the administrator guide and developer guide.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Architecture Blog

AWS Big Data Blog

Containers

AWS Database Blog

Artificial Intelligence

Networking & Content Delivery

AWS Security Blog

Open Source Project

Amplify UI