12/18/2025, 12:00:00 AM ~ 12/19/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon EC2 now supports Availability Zone ID across its APIs

Amazon EC2 now supports the Availability Zone ID (AZ ID) parameter, enabling you to create and manage resources such as instances, volumes, and subnets using consistent zone identifiers. AZ IDs are consistent and static identifiers that represent the same physical location across all AWS accounts, helping you optimize resource placement.

Prior to this launch, you had to use an AZ name when creating a resource, but these names can map to different physical locations in different accounts. This mapping made it difficult to ensure resources were always co-located, especially when operating across multiple accounts. Now, you can specify the AZ ID parameter directly in your EC2 APIs to guarantee consistent placement of resources. AZ IDs always refer to the same physical location across all accounts, which means you no longer need to manually map AZ names across your accounts or deal with the complexity of tracking and aligning zones. This capability is available for resources including instances, launch templates, hosts, reserved instances, fleet, spot instances, volumes, capacity reservations, network insights, VPC endpoints and subnets, network interfaces, fast snapshot restore, and instance connect. This feature is available in all AWS Regions, including the China and AWS GovCloud (US) Regions. To learn more about Availability Zone IDs, visit the documentation.
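
As a rough illustration, here is a minimal boto3 sketch of specifying an AZ ID when creating resources. The AvailabilityZoneId parameter for create_subnet is long-standing; passing an AZ ID in the run_instances placement is an assumption based on this announcement, so verify the exact parameter name in the EC2 API reference.

```python
# Minimal sketch: targeting an AZ ID with boto3. The run_instances placement
# key is an assumption based on this announcement; verify it in the EC2 API
# reference before relying on it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Subnets already accept an AZ ID, which maps to the same physical zone in
# every AWS account.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC
    CidrBlock="10.0.1.0/24",
    AvailabilityZoneId="use1-az1",
)

# Assumed: the launch lets you pass an AZ ID directly when launching
# instances instead of an account-specific AZ name.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="m7g.large",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZoneId": "use1-az1"},  # assumed key name
)
```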

Amazon WorkSpaces Applications announces Elastic fleets powered by Ubuntu Pro 24.04 LTS

Amazon WorkSpaces Applications now offers support for Ubuntu Pro 24.04 LTS on Elastic fleets, enabling Independent Software Vendors (ISVs) and central IT organizations to stream Ubuntu desktop applications to users while leveraging the flexibility, scalability, and cost-effectiveness of the AWS Cloud.

Amazon WorkSpaces Applications is a fully managed, secure desktop and application streaming service that provides users with instant access to their desktops and applications from anywhere. Within Amazon WorkSpaces Applications, Elastic fleet is a serverless fleet type that lets you stream desktop applications to your end users from an AWS-managed pool of streaming instances without needing to predict usage, create and manage scaling policies, or create an image. The Elastic fleet type is designed for customers that want to stream applications to users without managing any capacity or creating WorkSpaces Applications images. To get started, sign in to the WorkSpaces Applications management console and select the AWS Region of your choice. For the full list of Regions where WorkSpaces Applications is available, see the AWS Region Table. Amazon WorkSpaces Applications offers pay-as-you-go pricing. For more information, see Amazon WorkSpaces Applications Pricing.

Amazon EC2 C8a instances now available in the Europe (Spain) region

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Europe (Spain) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly codenamed Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances.

C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements. C8a instances are built on the AWS Nitro System and are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.

AWS IoT Core adds message batching to HTTP rule action

AWS IoT Core now lets you batch multiple IoT messages into a single HTTP rule action before routing the messages to downstream HTTP endpoints. This enhancement helps you reduce cost and throughput overhead when ingesting telemetry from your Internet of Things (IoT) workloads.

AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS Cloud. Using rules for AWS IoT, you can filter, process, and decode device data, and route that data to AWS services or third-party endpoints via 20+ AWS IoT rule actions, such as the HTTP rule action, which routes data to HTTP endpoints. With the new feature, you can now batch messages together before routing that data set to downstream HTTP endpoints. To efficiently process IoT messages using the new batching capability, connect your IoT devices to AWS IoT Core and define an HTTP rule action with your desired batch parameters. AWS IoT Core will then process incoming messages according to these specifications and route the messages to your designated HTTP endpoints. For example, you can now combine IoT messages published from multiple smart home devices in a single batch and route it to an HTTP endpoint in your smart home platform. This new feature is available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and China Regions. To learn more, visit our developer guide, pricing page, and API documentation.
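
Below is a minimal boto3 sketch of a topic rule with an HTTP action. The URL and SQL are placeholders, and the batch settings appear only as hypothetical commented-out keys, since this announcement does not spell out the new parameter names; see the AWS IoT rule action documentation for the real fields.

```python
# Minimal sketch: an AWS IoT topic rule with an HTTP action. The batch
# settings are hypothetical and therefore commented out; consult the AWS IoT
# rule action documentation for the real field names.
import boto3

iot = boto3.client("iot", region_name="us-east-1")

iot.create_topic_rule(
    ruleName="smart_home_to_http",
    topicRulePayload={
        "sql": "SELECT * FROM 'home/+/telemetry'",
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": False,
        "actions": [
            {
                "http": {
                    "url": "https://example.com/ingest",             # placeholder endpoint
                    "confirmationUrl": "https://example.com/ingest",
                    # Hypothetical batch parameters for the new feature:
                    # "batchMode": True,
                    # "maxBatchSize": 100,
                }
            }
        ],
    },
)
```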

Amazon WorkSpaces now supports IPv6

Amazon WorkSpaces now supports IPv6 for WorkSpaces domains and external endpoints, enabling users to connect through an IPv4/IPv6 dual-stack configuration from compatible clients (excluding SAML authentication). This helps customers meet IPv6 compliance requirements and eliminates the need for costly networking equipment to handle address translation between IPv4 and IPv6.

Dual-stack support for WorkSpaces addresses the Internet’s growing demand for IP addresses by offering a vastly larger address space than IPv4. This eliminates the need to manage overlapping address ranges within your Virtual Private Cloud (VPC). Customers can deploy WorkSpaces through a dual-stack configuration that supports both IPv4 and IPv6 protocols while maintaining backward compatibility with existing IPv4 systems. Customers can also connect to their WorkSpaces through PrivateLink VPC endpoints over IPv6, enabling them to access the service privately without routing traffic over the public internet. Connecting to Amazon WorkSpaces over an IPv4/IPv6 dual-stack configuration is supported in all AWS Regions where Amazon WorkSpaces is available, including the AWS GovCloud (US East and US West) Regions. There is no additional cost for this feature. To enable IPv6, you must use the latest WorkSpaces client application for Windows, macOS, Linux, PCoIP zero clients, or web access. To learn more about IPv6 support on Amazon WorkSpaces, refer to the Amazon WorkSpaces administration guide.

Amazon MSK Connect now supports dual-stack (IPv4 and IPv6) connectivity for new connectors

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports dual-stack connectivity (IPv4 and IPv6) for new connectors on Amazon MSK Connect. This capability enables customers to create connectors on MSK Connect using both IPv4 and IPv6 protocols, in addition to the existing IPv4-only option. It helps customers modernize applications for IPv6 environments while maintaining IPv4 compatibility, making it easier to meet compliance requirements and prepare for future network architectures.

Amazon MSK Connect is a fully managed service that allows you to deploy and operate Apache Kafka Connect connectors in a fully managed environment. Previously, connectors on MSK Connect only supported IPv4 addressing for all connectivity options. With this new capability, customers can now enable dual-stack connectivity (IPv4 and IPv6) on new connectors using the Amazon MSK Console, AWS CLI, SDK, or CloudFormation by setting the Network Type parameter during connector creation. All connectors on MSK Connect will use IPv4-only connectivity by default unless explicitly opted in to dual-stack when creating new connectors. Existing connectors will continue using IPv4 connectivity; to change this, you will need to delete and recreate the connector. Dual-stack connectivity for new connectors on MSK Connect is now available in all AWS Regions where Amazon MSK Connect is available, at no additional cost. To learn more about Amazon MSK dual-stack support, refer to the Amazon MSK developer guide.
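
A minimal sketch of connector creation with boto3 follows; the resource identifiers are placeholders, and the dual-stack network type setting is shown commented out because its exact name and placement in the request are assumptions based on this announcement.

```python
# Minimal sketch: creating an MSK Connect connector with boto3. All ARNs,
# subnets, and bootstrap servers are placeholders; the dual-stack opt-in is
# commented out because its exact parameter name is an assumption.
import boto3

mskconnect = boto3.client("kafkaconnect", region_name="us-east-1")

mskconnect.create_connector(
    connectorName="orders-sink-dualstack",
    kafkaConnectVersion="2.7.1",
    capacity={"autoScaling": {
        "minWorkerCount": 1, "maxWorkerCount": 2, "mcuCount": 1,
        "scaleInPolicy": {"cpuUtilizationPercentage": 20},
        "scaleOutPolicy": {"cpuUtilizationPercentage": 80},
    }},
    connectorConfiguration={"connector.class": "com.example.SinkConnector"},  # placeholder
    kafkaCluster={"apacheKafkaCluster": {
        "bootstrapServers": "b-1.example.kafka.us-east-1.amazonaws.com:9098",  # placeholder
        "vpc": {"subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"]},
    }},
    kafkaClusterClientAuthentication={"authenticationType": "IAM"},
    kafkaClusterEncryptionInTransit={"encryptionType": "TLS"},
    plugins=[{"customPlugin": {
        "customPluginArn": "arn:aws:kafkaconnect:us-east-1:123456789012:custom-plugin/example-plugin/abc",
        "revision": 1,
    }}],
    serviceExecutionRoleArn="arn:aws:iam::123456789012:role/msk-connect-role",  # placeholder
    # networkType="DUAL",  # assumed name/value for the new Network Type parameter
)
```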

Amazon ECS Managed Instances now supports Amazon EC2 Spot Instances

Amazon ECS Managed Instances now supports Amazon EC2 Spot Instances, extending the range of capabilities available with AWS-managed infrastructure. With this launch, you can leverage spare EC2 capacity at up to a 90% discount compared to On-Demand prices for fault-tolerant workloads, while AWS handles infrastructure management.

ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead, dynamically scale EC2 instances to match your workload requirements, and continuously optimize task placement to reduce infrastructure costs. You simply define your task requirements, such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the most optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances capacity provider configuration, including GPU-accelerated, network-optimized, and burstable performance instances, to run your workloads on the instance families you prefer. With today’s launch, you can additionally configure a new parameter, capacityOptionType, as spot or on-demand in your capacity provider configuration. Support for EC2 Spot Instances is available in all AWS Regions where Amazon ECS Managed Instances is available. You will be charged for the management of compute provisioned, in addition to your Amazon EC2 Spot costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
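
As a rough sketch, the snippet below shows where the new setting might sit in a Managed Instances capacity provider configuration. Only capacityOptionType and its spot/on-demand values come from this announcement; the surrounding field names are assumptions, so consult the ECS API reference for the exact request shape.

```python
# Illustrative sketch only: a hypothetical Managed Instances capacity
# provider configuration. Only capacityOptionType ("spot" or "on-demand")
# comes from this announcement; the other field names are assumptions.
managed_instances_provider = {
    "infrastructureRoleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",  # placeholder
    "instanceLaunchTemplate": {
        # Desired instance attributes rather than a fixed instance type (assumed shape).
        "instanceRequirements": {"vCpuCount": {"min": 2}, "memoryMiB": {"min": 4096}},
    },
    "capacityOptionType": "spot",  # new parameter introduced with this launch
}
print(managed_instances_provider)
```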

Amazon EC2 R8i and R8i-flex instances are now available in additional AWS regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Seoul), South America (São Paulo), and Asia Pacific (Tokyo) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i.

R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new R8i and R8i-flex instances, visit the AWS News blog.

AWS Direct Connect now supports resilience testing with AWS Fault Injection Service

AWS Direct Connect now supports resilience testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this capability, you can test and observe how your applications respond when Border Gateway Protocol (BGP) sessions over your virtual interfaces are disrupted, and validate your resilience mechanisms.

With this new capability, you can test how your applications handle Direct Connect BGP failover in a controlled environment. For example, you can validate that traffic routes to redundant virtual interfaces when a primary virtual interface’s BGP session is disrupted and that your applications continue to function as expected. This capability is particularly valuable for proactively testing Direct Connect architectures where failover is critical to maintaining network connectivity. This new action is available in all AWS Commercial Regions where AWS FIS is offered. To learn more, visit the AWS FIS product page and the Direct Connect FIS actions user guide.
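
The boto3 sketch below outlines what an FIS experiment template for this scenario could look like. The action ID, parameter, target resource type, and ARNs are hypothetical placeholders; look up the real Direct Connect action in the FIS actions reference.

```python
# Illustrative sketch: an FIS experiment template that disrupts BGP on a
# Direct Connect virtual interface. The action ID, parameter, target resource
# type, and ARNs are hypothetical placeholders.
import uuid

import boto3

fis = boto3.client("fis", region_name="us-east-1")

fis.create_experiment_template(
    clientToken=str(uuid.uuid4()),
    description="Validate failover when the primary VIF BGP session drops",
    roleArn="arn:aws:iam::123456789012:role/fis-experiment-role",  # placeholder
    stopConditions=[{"source": "none"}],
    targets={
        "primary-vif": {
            "resourceType": "aws:directconnect:virtual-interface",  # hypothetical
            "resourceArns": ["arn:aws:directconnect:us-east-1:123456789012:dxvif/dxvif-example"],  # placeholder
            "selectionMode": "ALL",
        }
    },
    actions={
        "disrupt-bgp": {
            "actionId": "aws:directconnect:disrupt-bgp-session",  # hypothetical action ID
            "parameters": {"duration": "PT10M"},                   # hypothetical parameter
            "targets": {"VirtualInterfaces": "primary-vif"},       # hypothetical target key
        }
    },
)
```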

AWS Lambda durable functions are now available in 14 additional AWS Regions

AWS Lambda durable functions enable developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Starting today, durable functions are available in 14 additional AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), Europe (Spain), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Malaysia), and Asia Pacific (Thailand).

Lambda durable functions extend the Lambda programming model with new primitives in your event handler, such as “steps” and “waits”, allowing you to checkpoint progress, automatically recover from failures, and pause execution without incurring compute charges for on-demand functions. With this Region expansion, you can orchestrate complex processes such as order workflows, user onboarding, and AI-assisted tasks closer to your users and data, helping you meet low-latency and data residency requirements while standardizing on a single serverless programming model. You can activate durable functions for new Python (versions 3.13 and 3.14) or Node.js (versions 22 and 24) based Lambda functions using the AWS Lambda API, AWS Management Console, or AWS SDK. You can also use infrastructure as code tools such as AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and the AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the AWS Lambda Developer Guide. To learn about pricing, visit AWS Lambda pricing. For the latest Region availability, visit the AWS Capabilities by Region page.
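
The Python sketch below is purely illustrative of the “steps” and “waits” idea described above; the attribute and method names are hypothetical stand-ins, not the actual durable functions API, which is documented in the AWS Lambda Developer Guide.

```python
# Illustrative sketch only: hypothetical names standing in for the "steps"
# and "waits" primitives described above; see the AWS Lambda Developer Guide
# for the real durable functions programming model.
import json

def handler(event, context):
    durable = context.durable  # hypothetical: assumed handle to durable primitives

    # A "step" checkpoints its result, so a retry after a failure resumes
    # here instead of re-running earlier work.
    order = durable.step("validate-order", lambda: validate(event))

    # A "wait" pauses execution without compute charges for on-demand
    # functions, e.g. until a payment webhook arrives.
    durable.wait("await-payment", timeout_seconds=3600)

    receipt = durable.step("charge-card", lambda: charge(order))
    return {"statusCode": 200, "body": json.dumps(receipt)}

def validate(event):
    return {"order_id": event.get("order_id"), "valid": True}

def charge(order):
    return {"order_id": order["order_id"], "charged": True}
```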

AWS Clean Rooms now supports change requests for existing collaborations

AWS Clean Rooms now supports change requests to modify existing collaboration settings, offering customers greater flexibility in managing collaborations and developing new use cases with their partners. With this new capability, you can submit a change request for a collaboration, including adding new members, updating member abilities, and modifying collaboration auto-approval settings. To maintain security, all collaboration members must approve change requests before updates take effect, ensuring that existing privacy controls remain protected. For transparency, all change requests are logged in the change history for member review. For example, when a publisher creates a Clean Rooms collaboration with an advertiser, the publisher can add the advertiser’s marketing agency as a new member that can receive the analysis results directly in their account, enabling faster time-to-insights and streamlined campaign optimizations with the publisher. This approach reduces onboarding time while maintaining the existing privacy controls for you and your partners.

With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.

Announcing 176 new AWS Security Hub controls in AWS Control Tower

Today, AWS announces that AWS Control Tower supports an additional 176 Security Hub controls in Control Catalog for use cases such as security, cost, durability, and operations. With this launch, you can now search, discover, enable, and manage these additional controls directly from AWS Control Tower and govern more use cases for your multi-account environment.

To get started, go to the Control Catalog in AWS Control Tower and search for controls with the Control owner filter set to AWS Security Hub; you will then see all the AWS Security Hub controls present in the Catalog. If you find controls that are relevant for you, you can enable them directly from the AWS Control Tower console. You can also use the ListControls, GetControl, and EnableControl APIs. You can search the new controls in all AWS Regions where AWS Control Tower is available, including AWS GovCloud (US). When you want to deploy a control, reference the list of supported Regions for that control to see where it can be enabled. To learn more, visit the AWS Control Tower User Guide.
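
A minimal boto3 sketch of discovering and enabling a control follows; the name-based filtering is only illustrative (the console offers a proper Control owner filter), and the target OU ARN is a placeholder.

```python
# Minimal sketch: listing controls from Control Catalog and enabling one on
# an organizational unit. The name match is only illustrative; the target OU
# ARN is a placeholder.
import boto3

catalog = boto3.client("controlcatalog", region_name="us-east-1")
controltower = boto3.client("controltower", region_name="us-east-1")

# Page through the Control Catalog and keep controls that look Security Hub-owned.
security_hub_controls = []
for page in catalog.get_paginator("list_controls").paginate():
    for control in page["Controls"]:
        if "Security Hub" in control.get("Name", ""):  # rough illustrative match
            security_hub_controls.append(control["Arn"])

# Enable one of the discovered controls on an OU.
if security_hub_controls:
    controltower.enable_control(
        controlIdentifier=security_hub_controls[0],
        targetIdentifier="arn:aws:organizations::123456789012:ou/o-example/ou-exam-ple12345",  # placeholder
    )
```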

Amazon ECS now enables you to define weekly event windows for scheduling task retirements on AWS Fargate

Amazon ECS now enables you to define weekly event windows for scheduling task retirements on AWS Fargate. This capability provides precise control over when infrastructure updates and task replacements occur, helping prevent disruption to mission-critical workloads during peak business hours.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. As part of the AWS shared responsibility model, Fargate maintains the underlying infrastructure with periodic platform updates. Fargate automatically retires your tasks for these updates and notifies you about upcoming task retirements via email and the AWS Health Dashboard. By default, tasks are retired 7 days after notification, but you can configure the fargateTaskRetirementWaitPeriod account setting to extend the retirement period to 14 days or initiate immediate retirement (0 days). Previously, you could build automation using the task retirement notification and wait period to perform service updates or task replacements on your own cadence. With today’s launch, you can now use the Amazon EC2 event windows interface to define weekly event windows for precise control over the timing of Fargate task retirements. For example, you can schedule task retirements for a mission-critical service that requires high uptime during weekdays by configuring retirements to occur only on weekends. To get started, configure the AWS account setting fargateEventWindows to enabled as a one-time setup. Once enabled, configure Amazon EC2 event window(s) by specifying time ranges, and associate the event window(s) with your ECS tasks by selecting Amazon ECS-managed tags as the association target. Use the aws:ecs:clusterArn tag to target tasks in an ECS cluster, the aws:ecs:serviceArn tag for ECS services, or aws:ecs:fargateTask with a value of true to apply the window to all Fargate tasks. This feature is now available in all commercial AWS Regions. To learn more, visit our documentation.
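
Putting those steps together, a minimal boto3 sketch might look like the following; the account setting and tag names come from this announcement, and the time ranges and identifiers are placeholders.

```python
# Minimal sketch: weekend-only Fargate task retirements. Setting and tag
# names come from this announcement; time ranges are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# One-time account opt-in.
ecs.put_account_setting(name="fargateEventWindows", value="enabled")

# Weekly window: Saturday 02:00 UTC through Sunday 22:00 UTC.
window = ec2.create_instance_event_window(
    Name="fargate-weekend-retirements",
    TimeRanges=[{
        "StartWeekDay": "saturday", "StartHour": 2,
        "EndWeekDay": "sunday", "EndHour": 22,
    }],
)

# Apply the window to all Fargate tasks via the ECS-managed tag.
ec2.associate_instance_event_window(
    InstanceEventWindowId=window["InstanceEventWindow"]["InstanceEventWindowId"],
    AssociationTarget={"InstanceTags": [{"Key": "aws:ecs:fargateTask", "Value": "true"}]},
)
```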

Amazon SES announces email validation

Today, Amazon Simple Email Service (SES) announces email validation, a new capability that helps customers reduce bounce rates and protect sender reputation by validating email addresses before sending. Customers can validate individual addresses via API calls or enable automatic validation across all outbound emails.

Email validation helps customers maintain list hygiene, reduce bounces, and improve delivery by identifying invalid addresses that could damage sender reputation. The API provides detailed validation insights such as syntax checks and DNS records. With Auto Validation enabled, SES automatically reviews every outbound email address without requiring any code changes. Auto Validation can be configured at the account level or at the configuration set level using simple toggles in the AWS Management Console, enabling seamless integration with existing workflows.
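
Since the announcement does not name the new API operations, the sketch below only gestures at what a single-address validation call might look like with boto3; the operation and field names are hypothetical and should be replaced with the ones in the Amazon SES API reference.

```python
# Illustrative sketch only: the operation and field names are hypothetical
# placeholders for the new SES validation API; replace them with the real
# ones from the Amazon SES API reference.
import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

# Hypothetical single-address validation call (uncomment once you have the
# actual operation name):
# result = ses.validate_email_address(EmailAddress="user@example.com")
# print(result)
```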

Email validation is available in all AWS Regions where Amazon SES is available.

To learn more, see the documentation on Email Validation in the Amazon SES Developer Guide. To start using Email Validation, visit the Amazon SES console.

Amazon MSK introduces KRaft support for Express Brokers with Apache Kafka v3.9

Amazon Managed Streaming for Apache Kafka (MSK) now supports Apache Kafka version 3.9 for Express Brokers. This release introduces support for KRaft (Kafka Raft), Apache Kafka’s new consensus protocol that eliminates the dependency on Apache ZooKeeper for metadata management. KRaft shifts metadata management in Kafka clusters from external Apache ZooKeeper nodes to a group of controllers within Kafka. This change allows metadata to be stored and replicated as topics within Kafka brokers, resulting in faster propagation of metadata.

New Express Broker clusters created using Kafka v3.9 will automatically use KRaft as the metadata management mode, giving you the benefits of this modern architecture from the start. The ability to upgrade existing clusters to v3.9 will be available in a future release.

Amazon MSK Express Brokers with Kafka v3.9 are available in all AWS Regions where Express Brokers are supported. To get started, create a new Express Broker cluster and select Kafka version 3.9 in the AWS Management Console or via the AWS CLI or AWS SDKs.
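
A minimal boto3 sketch of creating such a cluster follows; the Kafka version string, Express instance type, and network identifiers are placeholders to adapt to your environment.

```python
# Minimal sketch: a new Express Broker cluster on Kafka 3.9. The version
# string, instance type, and network identifiers are placeholders.
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

kafka.create_cluster_v2(
    ClusterName="orders-express-kraft",
    Provisioned={
        "KafkaVersion": "3.9.x",                 # placeholder version string
        "NumberOfBrokerNodes": 3,
        "BrokerNodeGroupInfo": {
            "InstanceType": "express.m7g.large",  # placeholder Express Broker size
            "ClientSubnets": [
                "subnet-0123456789abcdef0",
                "subnet-0123456789abcdef1",
                "subnet-0123456789abcdef2",
            ],
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    },
)
```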

Amazon Neptune Database is now available in the AWS Europe (Zurich) Region

Amazon Neptune Database is now available in the Europe (Zurich) Region on engine versions 1.4.5.0 and later. You can now create Neptune clusters using R5, R5d, R6g, R6i, X2iedn, T4g, and T3 instance types in the AWS Europe (Zurich) Region.

Amazon Neptune Database is a fast, reliable, and fully managed graph database as a service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on the W3C Resource Description Framework (RDF) model. Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and Region availability, refer to the Neptune pricing page and the AWS Region Table.
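
For illustration, a minimal boto3 sketch of creating a Neptune cluster and instance in the Europe (Zurich) Region follows; identifiers and the instance class are placeholders.

```python
# Minimal sketch: a Neptune cluster and instance in Europe (Zurich);
# identifiers and the instance class are placeholders.
import boto3

neptune = boto3.client("neptune", region_name="eu-central-2")  # Europe (Zurich)

neptune.create_db_cluster(
    DBClusterIdentifier="graph-cluster",
    Engine="neptune",
    EngineVersion="1.4.5.0",
)

neptune.create_db_instance(
    DBInstanceIdentifier="graph-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="neptune",
    DBClusterIdentifier="graph-cluster",
)
```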

AWS Blogs

AWS Japan Blog (Japanese)

AWS Big Data Blog

AWS Contact Center

Containers

AWS Database Blog

AWS DevOps & Developer Productivity Blog

AWS for Industries

Artificial Intelligence

AWS Storage Blog

Open Source Projects

AWS CLI

AWS CDK

Amplify for iOS

Amazon EKS Anywhere