11/20/2025, 12:00:00 AM ~ 11/21/2025, 12:00:00 AM (UTC)
Recent Announcements
Validate and enforce required tags in CloudFormation, Terraform and Pulumi with Tag Policies
AWS Organizations Tag Policies announces Reporting for Required Tags, a new validation check that proactively ensures your CloudFormation, Terraform, and Pulumi deployments include the required tags critical to your business. Your infrastructure-as-code (IaC) operations can now be automatically validated against tag policies to ensure tagging consistency across your AWS environments. With this, you can ensure compliance for your IaC deployments in two simple steps: 1) define your tag policy, and 2) enable validation in each IaC tool.

Tag Policies enables you to enforce consistent tagging across your AWS accounts with proactive compliance, governance, and control. With this launch, you can specify mandatory tag keys in your tag policies and enforce guardrails for your IaC deployments. For example, you can define a tag policy requiring that all EC2 instances in your IaC templates have “Environment”, “Owner”, and “Application” as required tag keys. You can start validation by activating the AWS::TagPolicies::TaggingComplianceValidator Hook in CloudFormation, adding validation logic to your Terraform plan, or activating the aws-organizations-tag-policies pre-built policy pack in Pulumi. Once configured, all CloudFormation, Terraform, and Pulumi deployments in the target account will be automatically validated and/or enforced against your tag policies, ensuring that resources like EC2 instances include the required “Environment”, “Owner”, and “Application” tags. You can use the Reporting for Required Tags feature via the AWS Management Console, AWS Command Line Interface, and AWS Software Development Kit. This feature is available with AWS Organizations Tag Policies in all AWS Regions where Tag Policies is available. To learn more, visit the Tag Policies documentation. To learn how to set up validation and enforcement, see the user guide for CloudFormation, this user guide for Terraform, and this blog post for Pulumi.
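The check these tools perform can be illustrated with a minimal sketch: given a set of required tag keys and a CloudFormation-style resource definition, report which required tags are missing. The policy and template shapes below are illustrative, not the actual Tag Policies schema.

```python
# Required tag keys from the example policy above.
REQUIRED_TAG_KEYS = {"Environment", "Owner", "Application"}

def missing_required_tags(resource: dict) -> set:
    """Return the required tag keys absent from a resource's Tags list."""
    present = {t.get("Key") for t in resource.get("Properties", {}).get("Tags", [])}
    return REQUIRED_TAG_KEYS - present

# A CloudFormation-style EC2 resource that forgot the "Application" tag.
ec2_instance = {
    "Type": "AWS::EC2::Instance",
    "Properties": {
        "Tags": [
            {"Key": "Environment", "Value": "prod"},
            {"Key": "Owner", "Value": "platform-team"},
        ]
    },
}
```

A validator hook would reject this deployment because `missing_required_tags(ec2_instance)` is non-empty.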
AWS DMS Schema Conversion adds SAP (Sybase) ASE to PostgreSQL support with generative AI
AWS Database Migration Service (DMS) Schema Conversion is a fully managed feature of DMS that automatically assesses and converts database schemas to formats compatible with AWS target database services. Today, we’re excited to announce that Schema Conversion now supports conversions from SAP Adaptive Server Enterprise (ASE) databases (formerly known as Sybase) to Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL, powered by a generative AI capability.

Using Schema Conversion, you can automatically convert database objects from your SAP (Sybase) ASE source to an Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL target. The integrated generative AI capability intelligently handles complex code conversions that typically require manual effort, such as stored procedures, functions, and triggers. Schema Conversion also provides detailed assessment reports to help you plan and execute your migration effectively. To learn more about this feature, see the documentation for using SAP (Sybase) ASE as a source for AWS DMS Schema Conversion and using SAP (Sybase) ASE as a source for AWS DMS for data migration. For details about the generative AI capability, please refer to the User Guide. For AWS DMS Schema Conversion regional availability, please refer to the Supported AWS Regions page.
Amazon RDS supports Multi-AZ for SQL Server Web Edition
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports Multi-AZ deployment for SQL Server Web Edition. SQL Server Web Edition is specifically designed to support public and internet-accessible web pages, websites, web applications, and web services, and is used by web hosters and web value-added providers (VAPs). These applications need high availability and automated failover to recover from hardware and database failures. Now customers can use SQL Server Web Edition with the Amazon RDS Multi-AZ deployment option, which provides a high availability solution. The new feature eliminates the need for customers to use more expensive options for high availability, such as SQL Server Standard Edition or Enterprise Edition.

To use the feature, customers simply configure their Amazon RDS for SQL Server Web Edition instance with the Multi-AZ deployment option. Amazon RDS automatically provisions and maintains a standby replica in a different Availability Zone (AZ), and synchronously replicates data across the two AZs. If the Multi-AZ primary database becomes unavailable, Amazon RDS automatically fails over to the standby replica, so customers can resume database operations quickly and without any administrative intervention. For more information about Multi-AZ deployment for RDS SQL Server Web Edition, refer to the Amazon RDS for SQL Server User Guide. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
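As a sketch of the configuration step, the helper below builds the parameters you would pass to boto3's `rds.create_db_instance(**params)`; `MultiAZ=True` is the setting this launch enables for the `sqlserver-web` engine. The identifier, instance class, and storage values are illustrative, and no AWS call is made here.

```python
def web_edition_multi_az_params(identifier: str) -> dict:
    """Build create_db_instance parameters for a Multi-AZ Web Edition instance."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "sqlserver-web",          # SQL Server Web Edition
        "DBInstanceClass": "db.m6i.large",  # example instance class
        "AllocatedStorage": 100,            # GiB, example value
        "MasterUsername": "admin",
        "ManageMasterUserPassword": True,   # let RDS manage the master password
        "MultiAZ": True,                    # provision a synchronous standby in another AZ
    }

params = web_edition_multi_az_params("web-app-db")
```

Existing Web Edition instances can likewise be converted by modifying the instance with the same Multi-AZ option.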
Amazon OpenSearch Serverless adds AWS PrivateLink for management console
Amazon OpenSearch Serverless now supports AWS PrivateLink for secure and private connectivity to the management console. With AWS PrivateLink, you can establish a private connection between your virtual private cloud (VPC) and Amazon OpenSearch Serverless to create, manage, and configure your OpenSearch Serverless resources without using the public internet. By enabling private network connectivity, this enhancement eliminates the need to use public IP addresses or to rely solely on firewall rules to access OpenSearch Serverless. With this release, OpenSearch Serverless management and data operations can be securely accessed through PrivateLink. Data ingestion and query operations on collections still require the OpenSearch Serverless-provided VPC endpoint configuration for private connectivity, as described in the OpenSearch Serverless VPC developer guide.

You can use PrivateLink connections in all AWS Regions where Amazon OpenSearch Serverless is available. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to the AWS PrivateLink pricing page for details. You can get started by creating an AWS PrivateLink interface endpoint for Amazon OpenSearch Serverless using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on creating an interface VPC endpoint for the management console. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
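A minimal sketch of the endpoint creation step: the helper builds the parameters for `ec2.create_vpc_endpoint`. The `com.amazonaws.<region>.aoss` service name is an assumption based on the usual PrivateLink naming pattern; confirm the exact name for this feature in your Region (for example with `ec2 describe-vpc-endpoint-services`) before using it.

```python
def aoss_endpoint_params(region: str, vpc_id: str, subnet_ids: list, sg_ids: list) -> dict:
    """Build create_vpc_endpoint parameters for an OpenSearch Serverless interface endpoint."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.aoss",  # assumed naming pattern; verify per Region
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,            # one subnet per AZ you want served
        "SecurityGroupIds": sg_ids,         # must allow HTTPS from your clients
        "PrivateDnsEnabled": True,          # resolve the service DNS name privately
    }

params = aoss_endpoint_params("us-east-1", "vpc-0abc", ["subnet-01"], ["sg-01"])
```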
Recycle Bin adds support for Amazon EBS Volumes
Recycle Bin for Amazon EBS, which helps you recover accidentally deleted snapshots and EBS-backed AMIs, now supports EBS Volumes. If you accidentally delete a volume, you can now recover it directly from Recycle Bin instead of restoring from a snapshot, reducing your recovery point objective with no data loss between the last snapshot and deletion. A recovered volume immediately delivers full performance, with no waiting for data to be downloaded from snapshots.

To use Recycle Bin, you set a retention period for deleted volumes, and you can recover any volume within that period. Recovered volumes are immediately available and retain all attributes—tags, permissions, and encryption status. Volumes not recovered are deleted permanently when the retention period expires. You create retention rules to enable Recycle Bin for all volumes or specific volumes, using tags to target which volumes to protect. EBS Volumes in Recycle Bin are billed at the same price as regular EBS Volumes; read more on the pricing page. To get started, read the documentation. The feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS commercial, China, and AWS GovCloud (US) Regions.
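A tag-targeted retention rule might be shaped like the sketch below, modeled on the Recycle Bin `CreateRule` API. The `"EBS_VOLUME"` resource type is an assumption based on this launch (the previously documented types were `EBS_SNAPSHOT` and `EC2_IMAGE`); check the Recycle Bin API reference for the exact value before using it.

```python
def volume_retention_rule(days: int, tag_key: str, tag_value: str) -> dict:
    """Build CreateRule-style parameters protecting volumes that carry a given tag."""
    return {
        "ResourceType": "EBS_VOLUME",  # assumed new resource type for this launch
        "RetentionPeriod": {
            "RetentionPeriodValue": days,
            "RetentionPeriodUnit": "DAYS",
        },
        # Only deleted volumes carrying this tag are retained by the rule.
        "ResourceTags": [{"ResourceTagKey": tag_key, "ResourceTagValue": tag_value}],
        "Description": "Keep deleted volumes recoverable",
    }

rule = volume_retention_rule(14, "Team", "data-platform")
```

Omitting `ResourceTags` would make a region-wide rule covering all volumes.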
AWS Cloud WAN adds Routing Policy for advanced traffic control and flexible network deployments
AWS announces the general availability of Cloud WAN Routing Policy, providing customers with fine-grained controls to optimize route management, control traffic patterns, and customize network behavior across their global wide-area networks.

AWS Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Using the new Routing Policy feature, customers can perform advanced routing techniques such as route filtering and summarization for better control over routes exchanged between AWS Cloud WAN and external networks. This feature enables customers to build controlled routing environments to minimize route reachability blast radius, prevent sub-optimal or asymmetric connectivity patterns, and avoid overrunning route tables with unnecessary propagated routes in global networks. In addition, this feature allows customers to set advanced Border Gateway Protocol (BGP) attributes to customize network traffic behavior per their individual needs and build highly resilient hybrid-cloud network architectures. This feature also provides advanced visibility into the routing databases to allow rapid troubleshooting of network issues in complex multi-path environments. The new Routing Policy feature is available in all AWS Regions where AWS Cloud WAN is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for enabling Routing Policy on AWS Cloud WAN. For more information, see the AWS Cloud WAN documentation pages and blog.
AWS Glue supports additional SAP entities as zero-ETL integration sources
AWS Glue now supports full snapshot and incremental load ingestion for new SAP entities using zero-ETL integrations. This enhancement introduces full snapshot data ingestion for SAP entities that lack complete change data capture (CDC) functionality, while also providing incremental data loading capabilities for SAP entities that don’t support the Operational Data Provisioning (ODP) framework. These new features work alongside existing capabilities for ODP-supported SAP entities, giving customers the flexibility to implement zero-ETL data ingestion strategies across diverse SAP environments.

Fully managed AWS zero-ETL integrations eliminate the engineering overhead associated with building custom ETL data pipelines. This new zero-ETL functionality enables organizations to ingest data from multiple SAP applications into Amazon Redshift or the lakehouse architecture of Amazon SageMaker to address scenarios where SAP entities lack deletion tracking flags or don’t support the Operational Data Provisioning (ODP) framework. Through full snapshot ingestion for entities without deletion tracking and timestamp-based incremental loading for non-ODP systems, zero-ETL integrations reduce operational complexity while saving organizations weeks of engineering effort that would otherwise be required to design, build, and test custom data pipelines across diverse SAP application environments. This feature is available in all AWS Regions where AWS Glue zero-ETL is currently available. To get started with the enhanced zero-ETL coverage for SAP sources, refer to the AWS Glue zero-ETL user guide.
Amazon MSK Serverless expands availability to South America (São Paulo) region
You can now connect your Apache Kafka applications to Amazon MSK Serverless in the South America (São Paulo) AWS Region.

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK Serverless is a cluster type for Amazon MSK that allows you to run Apache Kafka without having to manage and scale cluster capacity. MSK Serverless automatically provisions and scales compute and storage resources, so you can use Apache Kafka on demand. With this launch, Amazon MSK Serverless is now generally available in the Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (Paris), Europe (London), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more and get started, see our developer guide.
AWS announces availability of Microsoft SQL Server 2025 images on Amazon EC2
Amazon EC2 now supports Microsoft SQL Server 2025 with License-Included (LI) Amazon Machine Images (AMIs), providing a quick way to launch the latest version of SQL Server. By running SQL Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest SQL Server features.

Amazon creates and manages Microsoft SQL Server 2025 AMIs to simplify the provisioning and management of SQL Server 2025 on EC2 Windows instances. These images support version 1.3 of the Transport Layer Security (TLS) protocol by default for enhanced performance and security. These images also come with pre-installed software such as AWS Tools for Windows PowerShell, AWS Systems Manager, AWS CloudFormation, and various network and storage drivers to make your management easier. SQL Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more about the new AMIs, see the SQL Server AMIs User Guide or read the blog post.
AWS Application Load Balancer launches Target Optimizer
Application Load Balancer (ALB) now offers Target Optimizer, a new feature that allows you to enforce a maximum number of concurrent requests on a target.

With Target Optimizer, you can fine-tune your application stack so that targets receive only the number of requests they can process, achieving a higher request success rate, higher target utilization, and lower latency. This is particularly useful for compute-intensive workloads. For example, if you have applications that perform complex data processing or inference, you can configure each target to receive as few as one request at a time, ensuring the number of concurrent requests is in line with the target’s processing capabilities. You can enable this feature by creating a new target group with a target control port. Once enabled, the feature works with the help of an AWS-provided agent that you run on your targets to track request concurrency. For deployments that include multiple target groups per ALB, you have the flexibility to configure this capability for each target group individually. You can enable Target Optimizer through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. ALB Target Optimizer is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions. Traffic to target groups that enable Target Optimizer generates more LCU usage than regular target groups. For more information, see the pricing page, launch blog, and ALB User Guide.
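The core idea — cap concurrent requests per target and turn away overflow so it can be routed elsewhere — can be illustrated with a plain in-process sketch. This is not the AWS-provided agent, just the shape of the admission decision it enables.

```python
import threading

class ConcurrencyGate:
    """Admit work only while fewer than max_concurrent requests are in flight."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)
        self.rejected = 0

    def try_handle(self, work) -> bool:
        """Run work if a slot is free; otherwise reject immediately (no queueing)."""
        if not self._slots.acquire(blocking=False):
            self.rejected += 1
            return False
        try:
            work()
            return True
        finally:
            self._slots.release()

# e.g. an inference target that should process one request at a time
gate = ConcurrencyGate(max_concurrent=1)
```

With Target Optimizer, a rejected request isn't dropped — the load balancer sends it to a target with free capacity instead.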
Amazon Braket introduces spending limits feature for quantum processing units
Amazon Braket now supports spending limits, enabling customers to set spending caps on quantum processing units (QPUs) to manage costs. With spending limits, customers can define maximum spending thresholds on a per-device basis, and Amazon Braket automatically validates that each task submission doesn’t exceed the pre-configured limits. Tasks that would exceed remaining budgets are rejected before creation. For comprehensive cost management across all of Amazon Web Services, customers should continue to use the AWS Budgets feature as part of AWS Cost Management.

Spending limits are particularly valuable for research institutions managing quantum computing budgets across multiple users, for educational environments preventing accidental overspending during coursework, and for development teams experimenting with quantum algorithms. Customers can update or delete spending limits at any time as their requirements change. Spending limits apply only to on-demand tasks on quantum processing units and do not include costs for simulators, notebook instances, hybrid jobs, or tasks created during Braket Direct reservations. Spending limits are available now in all AWS Regions where Amazon Braket is supported at no additional cost. To get started, visit the Spending limits page in the Amazon Braket console and read our launch blog post.
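The admission check Braket performs can be sketched as: a task whose cost would push a device past its configured limit is rejected before the task is created. The device ARN and dollar figures below are hypothetical.

```python
DEVICE = "arn:aws:braket:::device/qpu/example"  # hypothetical device ARN
budgets = {DEVICE: (90.0, 100.0)}               # (spent, limit) in USD per device

def admit_task(budgets: dict, device: str, task_cost: float) -> bool:
    """Admit the task only if it fits within the device's remaining budget."""
    spent, limit = budgets[device]
    if spent + task_cost > limit:
        return False                      # rejected before task creation
    budgets[device] = (spent + task_cost, limit)
    return True
```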
Amazon EC2 Mac instances now support Apple macOS Tahoe
Starting today, customers can run Apple macOS Tahoe (version 26) as Amazon Machine Images (AMIs) on Amazon EC2 Mac instances. Apple macOS Tahoe is the latest major macOS version, and introduces multiple new features and performance improvements over prior macOS versions including running Xcode version 26.0 or later (which includes the latest SDKs for iOS, iPadOS, macOS, tvOS, watchOS, and visionOS).

Backed by Amazon Elastic Block Store (EBS), EC2 macOS AMIs are AWS-supported images that are designed to provide a stable, secure, and high-performance environment for developer workloads running on EC2 Mac instances. EC2 macOS AMIs include the AWS Command Line Interface, Command Line Tools for Xcode, Amazon SSM Agent, and Homebrew. The AWS Homebrew Tap includes the latest versions of AWS packages included in the AMIs. Apple macOS Tahoe AMIs are available for Apple silicon EC2 Mac instances and are published to all AWS Regions where Apple silicon EC2 Mac instances are available today. Customers can get started with macOS Tahoe AMIs via the AWS Console, Command Line Interface (CLI), or API. Learn more about EC2 Mac instances here or get started with an EC2 Mac instance here. You can also subscribe to EC2 macOS AMI release notifications here.
Amazon MQ now supports RabbitMQ version 4.2
Amazon MQ now supports RabbitMQ version 4.2, which introduces native support for the AMQP 1.0 protocol, a new Raft-based metadata store named Khepri, local shovels, and message priorities for quorum queues. RabbitMQ 4.2 also includes various bug fixes and performance improvements for throughput and memory management.

A key highlight of RabbitMQ 4.2 is the support of AMQP 1.0 as a core protocol, offering enhanced features like modified outcome, which allows consumers to modify message annotations before requeueing or dead lettering, and granular flow control, which offers benefits including letting a client application dynamically adjust how many messages it wants to receive from a specific queue. Amazon MQ has also introduced configurable resource limits for RabbitMQ 4.2 brokers, which you can modify based on your application requirements. Starting from RabbitMQ 4.0, mirroring of classic queues is no longer supported. Non-replicated classic queues are still supported. Quorum queues are the only replicated and durable queue type supported on RabbitMQ 4.2 brokers, and now offer message priorities in addition to consumer priorities. To start using RabbitMQ 4.2 on Amazon MQ, simply select RabbitMQ 4.2 when creating a new broker using the m7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs. Amazon MQ automatically manages patch version upgrades for your RabbitMQ 4.2 brokers, so you only need to specify the major.minor version. To learn more about the changes in RabbitMQ 4.2, see the Amazon MQ release notes and the Amazon MQ developer guide. This version is available in all Regions where Amazon MQ m7g type instances are available today.
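Declaring a quorum queue and sending a prioritized message looks like the sketch below. With a client such as pika (AMQP 0-9-1) you would pass the queue arguments to `queue_declare` and wrap the properties as `pika.BasicProperties(...)`; they are shown as plain dicts here so the sketch runs without a broker connection.

```python
# Quorum is the replicated, durable queue type on RabbitMQ 4.2 brokers.
quorum_queue_args = {"x-queue-type": "quorum"}

def priority_message_properties(priority: int) -> dict:
    """Message-level properties for a prioritized, persistent message."""
    # With pika: pika.BasicProperties(priority=priority, delivery_mode=2)
    return {"priority": priority, "delivery_mode": 2}  # 2 = persistent

props = priority_message_properties(5)
```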
Amazon Kinesis Data Streams now supports up to 50 enhanced fan-out consumers
Amazon Kinesis Data Streams now supports 50 enhanced fan-out consumers for On-demand Advantage streams. A higher fan-out limit lets customers attach many more independent, low-latency, high-throughput consumers to the same stream—unlocking parallel analytics, ML pipelines, compliance workflows, and multi-team architectures without creating extra streams or causing throughput contention. On-demand Advantage is an account-level setting that unlocks more capabilities and provides a different pricing structure for all on-demand streams in an AWS Region. On-demand Advantage offers data usage with 60% lower pricing compared to On-demand Standard, with data ingest at $0.032/GB, data retrieval at $0.016/GB, and enhanced fan-out data retrieval at $0.016/GB. High fan-out workloads are most cost effective with On-demand Advantage.

Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale. Enhanced fan-out is an Amazon Kinesis Data Streams feature that enables consumers to receive records from a data stream with dedicated throughput of up to 2 MB of data per second per shard, and this throughput automatically scales with the number of shards in a stream. A consumer that uses enhanced fan-out doesn’t have to contend with other consumers that are receiving data from the stream. For accounts with On-demand Advantage enabled, you can continue to use the existing Kinesis API RegisterStreamConsumer to register new consumers to use enhanced fan-out, up to the new limit of 50. Support for enhanced fan-out consumers is available in the AWS Regions listed here. For more information on Kinesis Data Streams quotas and limits, please see our documentation. For more information on On-demand Advantage, please see our documentation for On-demand Advantage.
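A back-of-the-envelope cost sketch using the On-demand Advantage rates quoted above: with N enhanced fan-out consumers each reading the full stream, retrieval is paid once per consumer, while ingest is paid once. (Data rates only; any other On-demand charges are out of scope here.)

```python
INGEST_PER_GB = 0.032          # On-demand Advantage data ingest
EFO_RETRIEVAL_PER_GB = 0.016   # enhanced fan-out data retrieval

def monthly_efo_cost(gb_ingested: float, consumers: int) -> float:
    """Data cost for a stream fanned out to `consumers` enhanced fan-out readers."""
    return gb_ingested * INGEST_PER_GB + gb_ingested * consumers * EFO_RETRIEVAL_PER_GB

# Example: 1 TiB/month fanned out to the new maximum of 50 consumers.
cost = monthly_efo_cost(1024, 50)
```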
Amazon Aurora DSQL now provides statement-level cost estimates in query plans
Amazon Aurora DSQL now provides statement-level cost estimates in query plans, giving developers immediate insight into the resources consumed by individual SQL statements. This enhancement surfaces Distributed Processing Unit (DPU) usage estimates directly within the query plan output, helping developers identify workload cost drivers, tune query performance, and better forecast resource usage.

With this launch, Aurora DSQL appends per-category (compute, read, write, and multi-Region write) and total estimated DPU usage at the end of the EXPLAIN ANALYZE VERBOSE plan output. The feature complements CloudWatch metrics by providing fine-grained, real-time visibility into query costs. Aurora DSQL support for DPU usage in EXPLAIN ANALYZE VERBOSE plans is available in all Regions where Aurora DSQL is available. To get started, visit the Aurora DSQL Understanding DPUs in EXPLAIN ANALYZE documentation.
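A sketch of pulling the per-category DPU estimates out of the plan output programmatically. The trailer-line format below is an assumption for illustration; see the Aurora DSQL documentation for the exact text the engine emits.

```python
import re

def dpu_estimates(plan_lines: list) -> dict:
    """Collect '<Category> DPU: <value>' style lines into a dict."""
    estimates = {}
    for line in plan_lines:
        m = re.match(r"\s*(\w+(?: \w+)*) DPU:\s*([0-9.]+)", line)
        if m:
            estimates[m.group(1)] = float(m.group(2))
    return estimates

sample_plan = [
    "Seq Scan on orders  (cost=0.00..35.50 rows=2550 width=4)",
    "Compute DPU: 0.81",   # hypothetical trailer lines appended by EXPLAIN ANALYZE VERBOSE
    "Read DPU: 1.20",
    "Total DPU: 2.01",
]
```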
Amazon Braket adds new quantum processor from Alpine Quantum Technologies (AQT)
Amazon Braket now offers access to IBEX Q1, a trapped-ion quantum processing unit (QPU) from Alpine Quantum Technologies (AQT), a new quantum hardware provider on Amazon Braket. IBEX Q1 is a 12-qubit system with all-to-all connectivity, enabling any qubit to directly interact with any other qubit without requiring intermediate SWAP gates.

With this launch, customers now have on-demand access to AQT’s trapped-ion technology for building and testing quantum programs, and priority access via Hybrid Jobs for running variational quantum algorithms, all with pay-as-you-go pricing. Customers can also reserve dedicated capacity on this QPU for time-sensitive workloads via Braket Direct with hourly pricing and no upfront commitments. At launch, IBEX Q1 is available Tuesdays and Wednesdays from 09:00 to 16:00 UTC, giving customers in European time zones convenient access during their work hours. IBEX Q1 is accessible from the Europe (Stockholm) Region. Researchers at accredited institutions can apply for credits to support experiments on Amazon Braket through the AWS Cloud Credits for Research program. To get started with IBEX Q1, visit the Amazon Braket devices page in the AWS Management Console to explore device specifications and capabilities. You can also explore our example notebooks and read our launch blog post.
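Since the device has a fixed weekly window, a scheduler might check availability before submitting tasks. The sketch below encodes the stated launch window (Tuesdays and Wednesdays, 09:00 to 16:00 UTC); the window may change over time, so treat the constants as a snapshot of this announcement.

```python
from datetime import datetime, timezone

def ibex_q1_available(when: datetime) -> bool:
    """True if `when` falls inside the launch availability window (Tue/Wed 09:00-16:00 UTC)."""
    when = when.astimezone(timezone.utc)
    return when.weekday() in (1, 2) and 9 <= when.hour < 16  # Mon=0, so Tue=1, Wed=2

# Tuesday 2025-11-25 10:30 UTC falls inside the window:
inside = ibex_q1_available(datetime(2025, 11, 25, 10, 30, tzinfo=timezone.utc))
```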
Amazon EC2 High Memory U7i instances now available in additional regions
Amazon EC2 High Memory U7i instances with 16TB of memory (u7in-16tb.224xlarge) are now available in the AWS Europe (Ireland) Region, U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Hyderabad) Region, and U7i instances with 8TB of memory (u7i-8tb.112xlarge) are now available in the Asia Pacific (Mumbai) and AWS GovCloud (US-West) Regions. U7i instances are part of the AWS 7th-generation instance family and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-16tb instances offer 16TiB of DDR5 memory, U7i-12tb instances offer 12TiB of DDR5 memory, and U7i-8tb instances offer 8TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.

U7i-8tb instances offer 448 vCPUs and deliver up to 100Gbps of network bandwidth, U7i-12tb instances offer 896 vCPUs and deliver up to 100Gbps of network bandwidth, and U7in-16tb instances offer 896 vCPUs and deliver up to 200Gbps of network bandwidth. All three sizes support up to 100Gbps of Elastic Block Store (EBS) bandwidth for faster data loading and backups, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Amazon SageMaker Unified Studio now supports long-running sessions with corporate identities
Amazon SageMaker Unified Studio now supports long-running sessions with corporate identities through AWS IAM Identity Center’s trusted identity propagation (TIP) capability. This feature enables data scientists, data engineers, and analytics professionals to achieve uninterrupted workflow continuity and improved productivity. Users can now initiate interactive notebooks from Amazon SageMaker Unified Studio and data processing sessions on Amazon EMR (EC2, EKS, Serverless) and AWS Glue that continue running in the background using their corporate credentials, even when they log off or their session expires.

With this capability, you can now launch resource-intensive, complex data processing sessions or exploratory analytics flows and step away from your workstation without interrupting progress. Sessions automatically maintain corporate identity permissions through IAM Identity Center’s trusted identity propagation, ensuring consistent security and access controls throughout execution. You can start multi-hour or multi-day workflows knowing the jobs will persist through network disconnections, laptop shutdowns, or credential refresh cycles, with sessions running for up to 90 days (default 7 days). This eliminates the productivity bottleneck of monitoring long-running processes and enables more efficient resource utilization across data teams. Long-running sessions are available in Amazon SageMaker Unified Studio in all existing SageMaker Unified Studio Regions. To learn more about user background sessions, see the Amazon EMR on EC2, Amazon EMR Serverless, AWS Glue, and Amazon EMR on EKS documentation.
Amazon Redshift Serverless now offers 4-RPU minimum capacity across more AWS Regions
Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 4 Redshift Processing Units (RPUs) in the AWS Asia Pacific (Thailand), Asia Pacific (Jakarta), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Osaka), Asia Pacific (Malaysia), Asia Pacific (Taipei), Mexico (Central), Israel (Tel Aviv), Europe (Spain), Europe (Milan), Europe (Frankfurt), and Middle East (UAE) Regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs; one RPU provides 16 GB of memory. You pay only for the duration of workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 8 RPUs. You can start using Amazon Redshift Serverless for as low as $1.50 per hour and pay only for the compute capacity your data warehouse consumes when it is active.

Amazon Redshift Serverless enables users to run and scale analytics without managing data warehouse clusters. The new lower capacity configuration makes Amazon Redshift Serverless suitable for both production and development environments, particularly when workloads require minimal compute and memory resources. This entry-level configuration supports data warehouses with up to 32 TB of Redshift managed storage, offering a maximum of 100 columns per table and 64 GB of memory. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
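The quoted $1.50 per hour at the 4-RPU base implies a rate of $0.375 per RPU-hour, billed per second (actual rates vary by Region; the figure here is taken from the announcement). A quick sketch of the math:

```python
RPU_HOUR_PRICE = 1.50 / 4  # $/RPU-hour implied by the quoted entry price

def serverless_cost(rpus: int, active_seconds: float) -> float:
    """Per-second billing: pay only while the warehouse is actively processing work."""
    return rpus * RPU_HOUR_PRICE * active_seconds / 3600.0

# A 4-RPU warehouse active for 30 minutes:
half_hour = serverless_cost(4, 1800)
```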
Amazon CloudFront now supports CBOR Web Tokens and Common Access Tokens
Amazon CloudFront now supports CBOR Web Tokens (CWT) and Common Access Tokens (CAT), enabling secure token-based authentication and authorization with CloudFront Functions at CloudFront edge locations. CWT provides a compact, binary alternative to JSON Web Tokens (JWT) using Concise Binary Object Representation (CBOR) encoding, while CAT extends CWT with additional fine-grained access controls including URL patterns, IP restrictions, and HTTP method limitations. Both token types use CBOR Object Signing and Encryption (COSE) for enhanced security and allow developers to implement lightweight, high-performance authentication mechanisms directly at the edge with sub-millisecond execution times.

CWT and CAT are ideal for performance-critical applications such as live video streaming platforms that need to validate viewer access tokens millions of times per second, or IoT applications where bandwidth efficiency is crucial. These tokens also provide a single, standardized method for content authentication across multi-CDN deployments, simplifying security management and eliminating the need for unique configurations for each CDN provider. For example, a media company can use CAT to create tokens that restrict access to specific video content based on subscription tiers, geographic location, and device types, all validated consistently across CloudFront and other CDN providers without requiring application network calls. With CWT and CAT support, you can validate incoming tokens, generate new tokens, and implement token refresh logic within CloudFront Functions. The feature integrates seamlessly with CloudFront Functions KeyValueStore for secure key management. CWT and CAT support for CloudFront Functions is available at no additional charge in all CloudFront edge locations. To learn more about CloudFront Functions CBOR Web Token support, see the Amazon CloudFront Developer Guide.
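The CAT-style claim checks described above (URL pattern, client IP range, HTTP method) can be illustrated in plain Python. A real deployment validates a COSE-signed token inside a CloudFront Function in JavaScript; this sketch only shows the shape of the authorization decision, and the claim names are hypothetical, not CAT's actual claim registry.

```python
import fnmatch
import ipaddress

def authorize(claims: dict, request: dict) -> bool:
    """Apply URL-pattern, method, and IP-range claims to a request."""
    if not fnmatch.fnmatch(request["uri"], claims.get("uri_pattern", "*")):
        return False
    if request["method"] not in claims.get("methods", ["GET"]):
        return False
    allowed_net = ipaddress.ip_network(claims.get("cidr", "0.0.0.0/0"))
    return ipaddress.ip_address(request["ip"]) in allowed_net

claims = {"uri_pattern": "/videos/premium/*", "methods": ["GET"], "cidr": "203.0.113.0/24"}
ok = authorize(claims, {"uri": "/videos/premium/ep1.m3u8", "method": "GET", "ip": "203.0.113.9"})
```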
Amazon EC2 R8i and R8i-flex instances are now available in additional AWS regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Sydney), Canada (Central), and US West (N. California) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i.

R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new R8i and R8i-flex instances, visit the AWS News blog.
Amazon CloudFront announces 3 new CloudFront Functions capabilities
Amazon CloudFront now supports three new capabilities for CloudFront Functions: edge location and Regional Edge Cache (REC) metadata, raw query string retrieval, and advanced origin overrides. Developers can now build more sophisticated edge computing logic with greater visibility into CloudFront’s infrastructure and precise, granular control over origin connections. CloudFront Functions allows you to run lightweight JavaScript code at CloudFront edge locations to customize content delivery and implement security policies with sub-millisecond execution times.

Edge location metadata includes the three-letter airport code of the serving edge location and the expected REC. This enables geo-specific content routing or compliance requirements, such as directing European users to GDPR-compliant origins based on client location. The raw query string capability provides access to the complete, unprocessed query string as received from the viewer, preserving special characters and encoding that may be altered during standard parsing. Advanced origin overrides solve critical challenges for complex application infrastructures by allowing you to customize SSL/TLS handshake parameters, including Server Name Indication (SNI). For example, multi-tenant setups may override SNI where CloudFront connects through CNAME chains that resolve to servers with different certificate domains. These new CloudFront Functions capabilities are available at no additional charge in all CloudFront edge locations. To learn more about CloudFront Functions, see the Amazon CloudFront Developer Guide.
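Why raw query string access matters can be shown with a quick round-trip experiment. The sketch below uses Python's standard library purely to demonstrate the alteration problem; the actual CloudFront Functions event model is JavaScript:

```python
from urllib.parse import parse_qs, urlencode

# A viewer may percent-encode a space as %20. Standard parsing decodes it,
# and re-serializing emits '+' instead, so the original bytes are lost.
# Anything that signs or hashes the query string (e.g. a signature check)
# would then fail; raw query string access avoids this alteration.
raw = "q=a%20b&sig=xyz"
parsed = parse_qs(raw)                    # {'q': ['a b'], 'sig': ['xyz']}
rebuilt = urlencode(parsed, doseq=True)   # 'q=a+b&sig=xyz'
assert rebuilt != raw                     # round trip does not preserve encoding
print(raw, "->", rebuilt)
```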
Amazon CloudWatch application map now supports un-instrumented services discovery
Application map in Amazon CloudWatch now supports discovery of un-instrumented services, cross-account views, and change history, helping SRE and DevOps teams monitor and troubleshoot their large-scale distributed applications. Application map now detects and visualizes services not instrumented with Application Signals, providing out-of-the-box observability coverage in your distributed environment. In addition, it provides a single, unified view for applications, services, and infrastructure distributed across AWS accounts, enabling end-to-end visibility. Furthermore, it provides a history of recent changes, helping teams quickly correlate when a modification occurred and how it aligns with shifts in application health or performance.

These enhancements help SRE and DevOps teams troubleshoot issues faster and operate with greater confidence in large-scale, distributed environments. For example, when latency or error rates spike, developers can now investigate recent configuration changes and analyze dependencies across multiple AWS accounts, all from a single map. During post-incident reviews, teams can use historical change data to understand what shifted and when, improving long-term reliability. By unifying service discovery, dependency mapping, and change history, application map reduces mean-time-to-resolution (MTTR) and helps teams maintain application health across complex systems. Starting today, the new capabilities in Application Map are available at no additional cost in all AWS commercial regions (except Taipei and New Zealand). To learn more about Application Map, please visit the Amazon CloudWatch Application Signals documentation.
AWS Step Functions enhances Local Testing with TestState API
AWS Step Functions enhances the TestState API to support local unit testing of workflows, allowing you to validate your workflow logic, including advanced patterns like Map and Parallel states, without deploying state machines to your AWS account.

AWS Step Functions is a visual workflow service capable of orchestrating over 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. The TestState API now supports testing of complete workflows, including error handling patterns, in your local development environment. You can now mock AWS service integrations, with optional API contract validation that verifies your mocked responses match the expected responses from actual AWS services, helping ensure your workflows work correctly in production. You can integrate TestState API calls into your preferred testing frameworks, such as Jest and pytest, and into CI/CD pipelines, enabling automated workflow testing as part of your development process. These capabilities help accelerate development by providing instant feedback on workflow definitions, enabling validation of workflow behavior in your local environment, and catching potential issues earlier in the development cycle. The enhanced TestState API is available through the AWS SDK in all AWS Regions where Step Functions is available. For a complete list of regions and service offerings, see AWS Regions. To get started, you can access the TestState API through the AWS SDK, AWS CLI, or check out the AWS Step Functions Developer Guide.
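A TestState call takes a state machine definition and input as JSON strings. The sketch below builds such a request locally; `definition`, `input`, and `inspectionLevel` are established TestState parameters, but the `mockedResponses` field name is a placeholder for the new mocking capability, whose exact request shape is not given in the announcement:

```python
import json

# Amazon States Language definition for a single Task state.
definition = {
    "StartAt": "GetItem",
    "States": {
        "GetItem": {
            "Type": "Task",
            "Resource": "arn:aws:states:::dynamodb:getItem",
            "End": True,
        }
    },
}

# Sketch of a TestState request body; check the Step Functions API reference
# for the real shape of the mocked-integration configuration.
request = {
    "definition": json.dumps(definition),
    "input": json.dumps({"id": "42"}),
    "inspectionLevel": "DEBUG",
    # hypothetical: canned DynamoDB response for the mocked integration
    "mockedResponses": {"GetItem": {"Item": {"id": {"S": "42"}}}},
}
print(json.dumps(request, indent=2))
```

In a pytest or Jest suite, this request would be passed to the SDK's TestState call and the returned output asserted against the expected state result.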
Amazon Quick Sight expands Dashboard Theme Customization
Amazon Quick Sight now supports comprehensive theming capabilities that enable organizations to maintain consistent brand identity across their analytics dashboards. Authors can customize interactive sheet backgrounds with gradient colors and angles, implement sophisticated card styling with configurable borders and opacity, and control typography for visual titles and subtitles at the theme level.

These enhancements address critical enterprise needs, including maintaining corporate visual identity and creating seamless embedded analytics experiences. With theme-level controls, organizations can ensure visual consistency across departments while enabling embedded dashboards to match host application styling. The theming capabilities are particularly valuable for embedded analytics scenarios, as the features enable dashboards to appear native within host applications, enhancing the overall professional appearance and user experience. Expanded theme capabilities are available in all supported Amazon Quick Sight regions.
Amazon SageMaker Unified Studio adds EMR on EKS support with SSO capabilities
Amazon SageMaker Unified Studio announces support for EMR on EKS as a compute resource for interactive Apache Spark sessions. This launch enables EMR on EKS capabilities such as large-scale distributed compute with automatic scaling, cost optimization, and containerized workload isolation directly within Amazon SageMaker Unified Studio. It allows customers to transition between interactive analysis and production-level data processing jobs without moving their workload between platforms.

Building on this capability, EMR on EKS in Amazon SageMaker Unified Studio now supports corporate identity through AWS IAM Identity Center’s trusted identity propagation. This enables seamless single sign-on and end-to-end data access traceability for interactive analytics sessions on EMR on EKS clusters. Data practitioners can access Glue Data Catalog resources using their corporate credentials from SageMaker Unified Studio’s JupyterLab environment, while administrators maintain fine-grained access controls and audit trails. This integration simplifies security governance and streamlines compliance for enterprise data workflows. EMR on EKS compute support in Amazon SageMaker Unified Studio is available in all existing SageMaker Unified Studio regions. To learn more, visit the SageMaker Unified Studio documentation.
AWS CloudTrail launches Insights for data events to automatically detect anomalies in data access
Today, AWS extends AWS CloudTrail Insights to data events. CloudTrail Insights helps you identify and respond to unusual activity associated with API call rates and API error rates in your AWS accounts. Until today, Insights worked by continuously analyzing only CloudTrail management events. Now, with today’s launch, Insights also analyzes data events, thereby strengthening your ability to quickly investigate and respond to potential security or operational issues.

Available on CloudTrail trails, Insights for data events automatically detects anomalies in data access activities, such as unexpected surges in Amazon S3 object delete API calls or increased error rates for AWS Lambda function invocations, enabling you to rapidly uncover potential security and operational issues, all without requiring you to build detection systems or export data to third-party tools. CloudTrail Insights for data events works by establishing normal baselines for data access patterns in your AWS accounts and creates a CloudTrail event when it detects anomalies. When an unusual pattern is detected, CloudTrail provides the relevant data events from the anomaly period, helping you precisely investigate what led to the anomaly. You can configure alerts to be automatically notified when potential issues occur, enabling rapid response to potential threats or issues. CloudTrail Insights for data events is available in all regions where AWS CloudTrail is available. To get started with CloudTrail Insights, see our documentation. Additional charges apply for Insights for data events. To learn more about pricing for this feature, visit the AWS CloudTrail pricing page.
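The baseline idea behind Insights can be conveyed with a toy example. CloudTrail's actual detection model is managed and not publicly specified, so the z-score heuristic below is purely illustrative of "establish a baseline, flag large deviations":

```python
from statistics import mean, stdev

def is_anomalous(history: list, current: int, threshold: float = 3.0) -> bool:
    """Toy baseline check: flag the current per-minute API call count when it
    deviates from the historical mean by more than `threshold` standard
    deviations. Illustrative only; not CloudTrail's real algorithm."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Typical per-minute S3 object delete call counts, then a sudden surge:
baseline = [12, 15, 11, 14, 13, 12, 14, 13]
print(is_anomalous(baseline, 400))  # True  (surge in delete calls)
print(is_anomalous(baseline, 14))   # False (within normal range)
```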
Amazon EC2 C7i instances are now available in the Asia Pacific (Melbourne) Region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Melbourne) Region. C7i instances are supported by custom Intel processors, available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare metal sizes support built-in Intel accelerators (Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology) that facilitate efficient offload and acceleration of data operations and optimize performance for workloads. C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance vs. up to 28 EBS volumes to a C6i instance. This allows customers to process larger amounts of data, scale workloads, and improve performance over C6i instances. To learn more, visit Amazon EC2 C7i Instances. To get started, see the AWS Management Console.
Amazon CloudFront now supports TLS 1.3 for origin connections
Amazon CloudFront now supports TLS 1.3 when connecting to your origins, providing enhanced security and improved performance for origin communications. This upgrade offers stronger encryption algorithms, reduced handshake latency, and better overall security posture for data transmission between CloudFront edge locations and your origin servers. TLS 1.3 support is automatically enabled for all origin types, including custom origins, Amazon S3, and Application Load Balancers, with no configuration changes required on your part.

TLS 1.3 provides faster connection establishment through a reduced number of round trips during the handshake process, delivering up to 30% improvement in connection performance when your origin supports it. CloudFront will automatically negotiate TLS 1.3 when your origin supports it, while maintaining backward compatibility with lower TLS versions for origins that haven’t yet upgraded. This enhancement benefits applications requiring high security standards, such as financial services, healthcare, and e-commerce platforms that handle sensitive data. TLS 1.3 support for origin connections is available at no additional charge in all CloudFront edge locations. To learn more about CloudFront origin TLS, see the Amazon CloudFront Developer Guide.
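Since CloudFront negotiates TLS 1.3 only when the origin supports it, origin operators may want to verify their server's support. One local diagnostic sketch, using Python's standard `ssl` module (the hostname is a placeholder; this is not CloudFront configuration):

```python
import ssl
import socket

# Build a client context that refuses anything older than TLS 1.3; a
# successful handshake against your origin then proves TLS 1.3 support.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

def origin_supports_tls13(host: str, port: int = 443) -> bool:
    """Attempt a TLS-1.3-only handshake against the origin (placeholder host)."""
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1.3"
    except (OSError, ssl.SSLError):
        return False

# e.g. origin_supports_tls13("origin.example.com")  # placeholder hostname
```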
Amazon EC2 introduces AMI ancestry for complete AMI lineage visibility
Amazon EC2 now provides Amazon Machine Image (AMI) ancestry that enables you to trace the complete lineage of any AMI, from its immediate parent through each preceding generation back to the root AMI. This capability gives you complete transparency into where your AMIs originated and how they’ve been propagated across regions.

Previously, tracking AMI lineage required manual processes, custom tagging strategies, and complex record-keeping across regions. This approach was error-prone and difficult to maintain at scale, especially when AMIs were copied across multiple regions. Now, with AMI ancestry, you have full visibility into the entire generational chain of any AMI in your environment. AMI ancestry addresses critical use cases such as tracking AMIs for compliance with internal policies, identifying all potentially vulnerable AMIs when security issues are discovered in the ancestral chain, and maintaining complete visibility of an AMI’s origin across regions. AMI ancestry can be accessed using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including AWS China and AWS GovCloud (US) Regions. To learn more, please visit our documentation here.
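Conceptually, ancestry is a child-to-parent chain that you can walk back to the root. The sketch below models that walk over illustrative sample data; the dict shape and AMI IDs are invented for the example and do not reflect the exact EC2 API response fields:

```python
# Hypothetical child -> parent mapping, standing in for ancestry data
# returned by the EC2 API (field names are illustrative, not the real API).
sample_ancestry = {
    "ami-ccc": "ami-bbb",
    "ami-bbb": "ami-aaa",
    "ami-aaa": None,   # root AMI has no parent
}

def lineage(ami_id: str, parents: dict) -> list:
    """Return the chain from the given AMI back to its root ancestor."""
    chain = [ami_id]
    while parents.get(ami_id):
        ami_id = parents[ami_id]
        chain.append(ami_id)
    return chain

print(lineage("ami-ccc", sample_ancestry))  # ['ami-ccc', 'ami-bbb', 'ami-aaa']
```

This is the shape of query that, for example, lets you find every descendant-in-use when a vulnerability is discovered in an ancestor image.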
EC2 Auto Scaling introduces instance lifecycle policy
Today, EC2 Auto Scaling announces a new feature called instance lifecycle policy. Customers can now configure a way to retain their instances when their termination lifecycle hooks fail or time out, providing greater confidence in managing instances for graceful shutdown.

You can add lifecycle hooks to an Auto Scaling group (ASG) to perform custom actions when an instance enters a wait state. You can choose a target service (e.g., Amazon EventBridge or AWS Lambda) to perform these actions depending on your preferred development approach. Customers use ASG lifecycle hooks to save application state, properly close database connections, back up important data from local storage, delete sensitive data/credentials, or deregister from service discovery before instance termination. Previously, both default results (continue and abandon) led to the ASG terminating instances when the lifecycle hook timeout elapsed or an unexpected failure occurred. With the new instance lifecycle policy, you can now configure retention triggers to keep your instances in a retained state for manual intervention until you’re ready to terminate them again. This policy provides greater confidence in graceful instance termination and is especially helpful for stateful applications running on ASG. This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore). To get started, visit the EC2 Auto Scaling console or refer to our technical documentation.
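The behavioral change can be modeled as a small decision function. All names here are illustrative (the real policy is configured on the ASG, not expressed as code like this); the sketch only contrasts the old always-terminate behavior with the new retain option:

```python
from enum import Enum

class HookOutcome(Enum):
    COMPLETED = "completed"
    TIMED_OUT = "timed_out"
    FAILED = "failed"

def next_action(outcome: HookOutcome, retain_on_failure: bool) -> str:
    """Toy model of termination-hook handling. Previously a timed-out or
    failed hook always led to termination; with an instance lifecycle policy
    (modeled by `retain_on_failure`), the instance can instead be kept in a
    retained state for manual intervention. Names are illustrative."""
    if outcome is HookOutcome.COMPLETED:
        return "terminate"
    return "retain" if retain_on_failure else "terminate"

print(next_action(HookOutcome.TIMED_OUT, retain_on_failure=True))   # retain
print(next_action(HookOutcome.TIMED_OUT, retain_on_failure=False))  # terminate
```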
Amazon EC2 Auto Scaling now supports root volume replacement through instance refresh
Today, Amazon EC2 Auto Scaling announced a new strategy, ReplaceRootVolume, within instance refresh. This feature allows customers to update the root volume of an EC2 instance without stopping or terminating the instance, while preserving other associated instance resources. The capability reduces operational complexity, simplifies software patching, and streamlines recovery from corrupted root volumes.

Customers use instance refresh to update the instances in their Auto Scaling groups (ASGs). This feature can be useful when customers want to migrate their instances to new instance types to take advantage of the latest improvements and optimizations. Traditionally, this process involved terminating older instances and launching new ones in a controlled manner. The new ReplaceRootVolume strategy transforms how customers manage instance lifecycles and software updates in their ASGs by enabling the EC2 Auto Scaling service to replace the root Amazon EBS volume for running instances without stopping them. Organizations can now implement OS-level updates and security patches more efficiently without worrying about capacity management. This is especially valuable for workloads that use specialized instance types like Mac or GPU instances. Customers with stateful applications can now refresh their fleets with more confidence that their instance data, metadata, and attachments (such as network interfaces and Elastic IP addresses) will be preserved with the new ReplaceRootVolume strategy. This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), at no additional cost beyond standard EC2 and EBS usage. To get started, refer to our technical documentation.
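A StartInstanceRefresh request selects a strategy; the sketch below builds such a request body locally. The `Strategy` value comes from this announcement, the group name is a placeholder, and the overall shape should be checked against the EC2 Auto Scaling API reference before use:

```python
import json

# Sketch of a StartInstanceRefresh request using the new strategy.
request = {
    "AutoScalingGroupName": "my-asg",    # placeholder group name
    "Strategy": "ReplaceRootVolume",     # new strategy from this launch
    "Preferences": {
        "MinHealthyPercentage": 90,      # standard instance refresh preference
    },
}
print(json.dumps(request, indent=2))
```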
AWS Parallel Computing Service now supports Slurm REST API
AWS Parallel Computing Service (AWS PCS) now supports the Slurm REST API. This new feature enables you to programmatically submit jobs, monitor cluster status, and manage resources using HTTP requests instead of relying on command-line tools.

AWS PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. The Slurm REST API helps you automate cluster operations and integrate HPC resources into existing systems and workflows, including web portals, CI/CD pipelines, and data processing frameworks, all without the overhead of maintaining additional REST API infrastructure. This feature is available in all AWS Regions where AWS PCS is available, and there’s no additional cost to use the feature. To learn more about using the Slurm REST API, see the AWS PCS User Guide.
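A job submission over the Slurm REST API is an authenticated HTTP POST with a JSON body. The sketch below constructs (but does not send) such a request; the host and token are placeholders, and the endpoint path and payload fields follow general slurmrestd conventions, so verify them against your cluster's Slurm version:

```python
import json
from urllib import request as urlreq

# JSON body for a job submission: the batch script plus job properties.
payload = {
    "script": "#!/bin/bash\nsrun hostname",
    "job": {"name": "hello", "partition": "compute", "environment": ["PATH=/usr/bin"]},
}

# Construct the POST request (placeholder host and token; not sent here).
req = urlreq.Request(
    url="https://slurm.example.internal/slurm/v0.0.40/job/submit",
    data=json.dumps(payload).encode(),
    headers={"X-SLURM-USER-TOKEN": "<jwt>", "Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```

Sending the request with `urlreq.urlopen(req)` would return a JSON response containing the assigned job ID, which a web portal or CI/CD pipeline can then poll for status.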
AWS Blogs
AWS Japan Blog (Japanese)
- Introducing Sanen Neo Phoenix’s AWS generative AI case study: “Building an Automatic Basketball Scouting Report Generation System Using Amazon Bedrock and Step Functions”
- AWS Weekly Roundup: AWS Lambda, Load Balancers, Amazon DCV, Amazon Linux 2023, etc. (11/17/2025)
- AWS Lambda enhances event processing with SQS event source mapping provisioning mode
- Introducing the AWS IoT Core Device Location integration with Amazon Sidewalk
- NTT West’s AWS Case Study: Sales Support AI Bot Development Using Amazon Bedrock Knowledge Bases
- Securing City-Wide Events: An Integrated Physical and Logical Security Approach at AWS re:Invent
- AWS re:Invent 2025: A security session guide to learning about 4 transformative themes
- Introducing Anthropic solutions available on AWS
- Kiro: Does the code match the specs? ~Measuring “Correctness” with Property-Based Tests~
- Claude Code on AWS pattern explanation — Amazon Bedrock / AWS Marketplace
AWS Open Source Blog
- Introducing Strands Agent SOPs – Natural Language Workflows for AI Agents
- Announcing ml-container-creator for easy BYOC on SageMaker
AWS Cloud Operations Blog
AWS Big Data Blog
- Enforce business glossary classification rules in Amazon SageMaker Catalog
- Enhanced data discovery in Amazon SageMaker Catalog with custom metadata forms and rich text documentation
AWS Compute Blog
- Building multi-tenant SaaS applications with AWS Lambda’s new tenant isolation mode
- Improve API discoverability with the new Amazon API Gateway Portal
AWS Contact Center
- Always on, always assuring: Unlocking continuous CX quality with cloud-based monitoring
- Implementing multi-skill forecasting and scheduling in Amazon Connect
Containers
AWS Database Blog
- Implement high availability in Amazon RDS for SQL Server Web Edition using block-level replication
- Multi-key support for Global Secondary Index in Amazon DynamoDB
- Accelerating data modeling accuracy with the Amazon DynamoDB Data Model Validation Tool
AWS for Industries
- Your guide to AWS Advertising & Marketing Technology at re:Invent 2025
- Revolutionizing healthcare with AI-driven digital pathology
- Transforming Smart Product Companies into Digital Service Leaders: A Data-Driven Marketing Architecture on AWS
Artificial Intelligence
- MSD explores applying generative AI to improve the deviation management process using AWS services
- Accelerating genomics variant interpretation with AWS HealthOmics and Amazon Bedrock AgentCore
- How Rufus scales conversational shopping experiences to millions of Amazon customers with Amazon Bedrock
- How Care Access achieved 86% data processing cost reductions and 66% faster data processing with Amazon Bedrock prompt caching
AWS for M&E Blog
Networking & Content Delivery
- Drive application performance with Application Load Balancer Target Optimizer
- AWS Cloud WAN Routing Policy: Fine-grained controls for your global network (Part 1)
- Introducing Amazon VPC Regional NAT Gateway
- Introducing AWS Site-to-Site VPN Concentrator for multi-site connectivity
AWS Security Blog
- Introducing the Landing Zone Accelerator on AWS Universal Configuration and LZA Compliance Workbook
- Transfer data across AWS partitions with IAM Roles Anywhere
- How to update CRLs without public access using AWS Private CA