9/16/2025, 12:00:00 AM ~ 9/17/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon EKS introduces a new catalog of community add-ons in the AWS GovCloud (US) Regions

Today, Amazon Elastic Kubernetes Service (EKS) announced a new catalog of community add-ons that includes metrics-server, kube-state-metrics, cert-manager, prometheus-node-exporter, fluent-bit, and external-dns. This enables you to easily find, select, configure, and manage popular open-source Kubernetes add-ons directly through EKS. Each add-on has been packaged, scanned, and validated for compatibility by EKS, with container images securely hosted in an EKS-owned private Amazon Elastic Container Registry (ECR) repository.

To make Kubernetes clusters production-ready, you need to integrate various operational tools and add-ons. These add-ons can come from various sources, including AWS and open-source community repositories. Now, EKS makes it easy for you to access a broader selection of add-ons, providing a unified management experience for AWS and community add-ons. You can view available add-ons, compatible versions, and configuration options, and install and manage them directly through the EKS Console, API, CLI, eksctl, or IaC tools like AWS CloudFormation. This feature is available in all AWS GovCloud (US) Regions. To learn more, visit the EKS documentation.
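Installing one of these community add-ons through the API can be sketched as follows. This is a minimal sketch that only builds the parameters for the EKS CreateAddon call (as passed to boto3's `eks_client.create_addon(**params)`); the cluster name is hypothetical, and the add-on names come from the announcement above.

```python
import json

# Community add-ons listed in the announcement.
COMMUNITY_ADDONS = [
    "metrics-server", "kube-state-metrics", "cert-manager",
    "prometheus-node-exporter", "fluent-bit", "external-dns",
]

def build_create_addon_request(cluster_name, addon_name, addon_version=None):
    """Build the parameters for the EKS CreateAddon API call
    (e.g. boto3: eks_client.create_addon(**params))."""
    if addon_name not in COMMUNITY_ADDONS:
        raise ValueError(f"{addon_name} is not in the community add-on catalog")
    params = {"clusterName": cluster_name, "addonName": addon_name}
    if addon_version is not None:
        # Compatible versions can be discovered via DescribeAddonVersions.
        params["addonVersion"] = addon_version
    return params

# "my-govcloud-cluster" is a hypothetical cluster name.
params = build_create_addon_request("my-govcloud-cluster", "metrics-server")
print(json.dumps(params))
```

Omitting `addonVersion` lets EKS pick a default compatible version; the same parameters map directly onto `aws eks create-addon` on the CLI.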

Amazon EC2 I7i instances now available in South America (São Paulo), Canada West (Calgary) regions

Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS South America (São Paulo), Canada West (Calgary) regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.

I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency when accessing small to medium size datasets (multi-TB). Additionally, the torn write prevention feature supports up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.

Amazon Lex provides generative AI based enhanced natural language understanding in eight new languages

Amazon Lex now allows you to leverage large language models (LLMs) to improve the natural language understanding of your deterministic conversational AI bots in eight new languages: Chinese, Japanese, Korean, Portuguese, Catalan, French, Italian, and German. With this capability, your voice- and chat-bots can better handle complex utterances, maintain accuracy despite spelling errors, and extract key information from verbose inputs to fulfill the customer’s request. For example, a customer could say ‘Hi I want to book a flight for my wife, my two kids and myself’, and the LLM will properly identify to book flight tickets for four people.

This feature is available in 10 commercial AWS Regions where Amazon Connect is available: Europe (Ireland), Europe (Frankfurt), US East (N. Virginia), Asia Pacific (Seoul), Europe (London), Asia Pacific (Tokyo), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central). To learn more about this feature, visit Amazon Lex documentation or to learn how Amazon Connect and Amazon Lex deliver cloud-based conversational AI experiences for contact centers, please visit the Amazon Connect website.

New fault action in AWS FIS to inject I/O latency on Amazon EBS volumes

Today, Amazon EBS announced a new latency injection action in AWS Fault Injection Service (FIS), a fully managed service for running fault injection experiments. You can now use this action to inject I/O latency on your volumes as part of a controlled testing experiment to understand how your mission-critical applications respond to storage faults. With the new fault action, you can test your architecture against elevated storage latency, allowing you to observe application behavior and fine-tune your monitoring and recovery processes to ensure high availability.

EBS volumes are designed to meet the needs of highly available, latency-sensitive applications such as Oracle, SAP HANA, and Microsoft SQL Server. The latency injection action simulates degraded I/O performance on your volume to replicate the real-world signals, such as Amazon CloudWatch alarms and operating system timeouts, that occur during storage performance issues. Using this action, you can build confidence that your application can withstand and quickly recover from disruptions that cause high I/O latency on your EBS volume. To get started, you can directly use the pre-defined latency injection experiment templates available in the EBS and FIS consoles. Alternatively, you can customize these experiment templates or create your own experiment templates to meet your application-specific testing needs. You can integrate these latency injection experiments into your existing chaos engineering tests, continuous integration, and release testing, as well as combine multiple FIS actions in one experiment. This new action is available in all AWS Regions where AWS FIS is available. To learn more, visit the EBS FIS actions user guide.
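A custom experiment template for this kind of fault can be sketched as below. This only builds the request body you would hand to FIS (CreateExperimentTemplate); the action ID for the new latency fault is left as a placeholder because the announcement does not name it, and the role and volume ARNs are hypothetical. The pre-defined templates in the FIS console are the authoritative source for the exact action ID and parameters.

```python
def build_latency_experiment_template(volume_arn, role_arn, duration="PT5M"):
    """Sketch of an FIS experiment template targeting one EBS volume.
    The actionId below is a placeholder, NOT the real identifier of the
    new latency action; copy it from a pre-defined console template."""
    return {
        "description": "Inject I/O latency on one EBS volume",
        "roleArn": role_arn,
        "stopConditions": [{"source": "none"}],
        "targets": {
            "myVolume": {
                "resourceType": "aws:ec2:ebs-volume",  # assumed target type
                "resourceArns": [volume_arn],
                "selectionMode": "ALL",
            }
        },
        "actions": {
            "injectLatency": {
                "actionId": "aws:ebs:<latency-action>",  # placeholder
                "parameters": {"duration": duration},    # ISO 8601 duration
                "targets": {"Volumes": "myVolume"},
            }
        },
    }
```

In practice you would add a real stop condition (for example, a CloudWatch alarm on application error rate) rather than `"source": "none"`, so the experiment halts if the fault causes more disruption than expected.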

Amazon EC2 supports detailed performance stats on all NVMe local volumes

Today, Amazon announced the availability of detailed performance statistics for Amazon EC2 instance store NVMe volumes. This new capability delivers real-time visibility into the performance of your AWS Nitro System-based EC2 instance store NVMe volumes, making it easier to monitor storage health and quickly resolve application performance issues.

With EC2 detailed performance statistics, you can access 11 comprehensive metrics at one-second granularity to monitor input/output (I/O) statistics of your locally attached NVMe volumes, including queue length measurements, IOPS, throughput, and detailed I/O latency histograms. These metrics are similar to the detailed performance statistics available for EBS volumes, providing a consistent monitoring experience across both storage types. The granular visibility provided by these metrics helps you identify specific workloads affected by performance variations, and optimize your application’s I/O patterns for maximum efficiency. Additionally, the metrics include latency histograms broken down by I/O size, providing even more detailed insights into performance patterns. Detailed performance statistics for EC2 instance store NVMe volumes are available by default for all Nitro-based EC2 instances with locally attached NVMe volumes across all AWS Commercial and China Regions, at no additional charge.
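The latency histograms mentioned above are bucketed counts rather than raw samples, so percentiles have to be approximated from the buckets. The sketch below shows the general technique; the bucket bounds are illustrative and do not reflect the actual metric format, which is documented in the EC2 user guide.

```python
def histogram_percentile(buckets, percentile):
    """Approximate a latency percentile from histogram buckets.

    `buckets` is a list of (upper_bound_us, io_count) pairs, the general
    shape of a bucketed I/O latency histogram. Returns the upper bound
    of the bucket that contains the requested percentile.
    """
    total = sum(count for _, count in buckets)
    if total == 0:
        raise ValueError("empty histogram")
    threshold = total * percentile / 100.0
    running = 0
    for upper_bound, count in buckets:
        running += count
        if running >= threshold:
            return upper_bound
    return buckets[-1][0]

# Example: 10,000 I/Os, most under 256 us, with a small tail above 1 ms.
hist = [(64, 2000), (128, 5000), (256, 2500), (1024, 450), (8192, 50)]
print(histogram_percentile(hist, 99))  # -> 1024
```

Because the result is only as precise as the bucket boundaries, a p99 of 1024 µs here means "the 99th-percentile I/O completed somewhere in the 256–1024 µs bucket", which is usually enough to spot a latency regression.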

To learn more about the EC2 instance store NVMe detailed performance statistics and how to access them, please visit the documentation here.

Amazon AppStream 2.0 adds support for fractional GPU instances

Today, Amazon AppStream 2.0 announces support for Graphics G6 instances with fractionalized GPU sizes. Built on the EC2 G6 family, these instances are designed for graphics applications that need smaller GPU fractions.

Graphics G6 instances with fractionalized GPU sizes (G6f and Gr6f) allow users to utilize only the GPU resources they need, rather than provisioning full GPU instances. This approach helps enable better resource optimization through shared GPU capacity, offering flexibility to choose smaller GPU fractions (such as 1/2, 1/4, or 1/8) that align with specific workload requirements. Organizations can benefit from reduced costs by avoiding over-provisioning while maintaining access to GPU capabilities for applications that don’t require full GPU power. These new instance types are available in 10 AWS Regions, including US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, London), Asia Pacific (Tokyo, Mumbai, Sydney), and South America (Sao Paulo). AppStream 2.0 offers pay-as-you-go pricing; see Amazon AppStream 2.0 Pricing for more information. To get started, select an AppStream 2.0 Graphics G6 instance with a fractionalized GPU size when launching an image builder or creating a new fleet. You can launch Graphics G6 instances with fractionalized GPU sizes using either the AWS Management Console or the AWS SDK. To learn more, see AppStream 2.0 Instance Families.
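Creating a fleet on one of these sizes via the SDK can be sketched as follows. This only builds the parameters for AppStream's CreateFleet call (boto3: `appstream_client.create_fleet(**params)`); the fleet name, image name, and the exact fractional-GPU instance type string are assumptions here, so take the real instance type value from the AppStream console's instance family list.

```python
def build_fractional_gpu_fleet_request(fleet_name, image_name, instance_type):
    """Parameters for AppStream CreateFleet. The instance_type argument
    should be one of the fractional-GPU G6 sizes (G6f/Gr6f); the exact
    type strings are listed in the AppStream console, not assumed here."""
    return {
        "Name": fleet_name,
        "ImageName": image_name,
        "InstanceType": instance_type,  # a fractional-GPU G6 size string
        "ComputeCapacity": {"DesiredInstances": 1},
    }

# Hypothetical names for illustration only.
req = build_fractional_gpu_fleet_request(
    "design-fleet", "my-cad-image", "<g6f-instance-type>")
```

The same choice is exposed in the console as the instance family selector when you launch an image builder or create a fleet.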

AWS Storage Gateway now supports IPv6

AWS Storage Gateway announces Internet Protocol version 6 (IPv6) support for AWS Storage Gateway endpoints, APIs, and gateway appliance interfaces. This enhancement enables both IPv6 and IPv4 access to our new dual-stack endpoints. The existing AWS Storage Gateway endpoints supporting IPv4 only will remain available for backwards compatibility.

AWS Storage Gateway provides on-premises access to data stored in AWS storage. With this launch, customers can standardize their applications and workflows for managing their AWS Storage Gateway resources on IPv6 while maintaining backward compatibility with IPv4 clients. By using the new dual-stack capabilities in the Storage Gateway appliances, service endpoints, and APIs, customers can transition from IPv4 to IPv6 gradually without needing to switch all their networking at once. AWS Storage Gateway support for IPv6 is available in all AWS Regions where the service is offered. To learn more, visit the AWS Storage Gateway user guide.
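Opting an SDK client into dual-stack endpoints is standard AWS SDK configuration rather than anything Storage Gateway-specific; a minimal sketch using the shared environment variable:

```python
import os

# AWS_USE_DUALSTACK_ENDPOINT is a standard SDK configuration setting:
# clients created after this will resolve dual-stack service endpoints
# where the service offers them.
os.environ["AWS_USE_DUALSTACK_ENDPOINT"] = "true"

# With boto3 installed, the same can be done per-client instead
# (not executed here):
#   from botocore.config import Config
#   sgw = boto3.client("storagegateway",
#                      config=Config(use_dualstack_endpoint=True))
print(os.environ["AWS_USE_DUALSTACK_ENDPOINT"])
```

The per-client form is usually preferable during a gradual migration, since it lets you move one workflow at a time while the rest keep using the IPv4-only endpoints.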

Amazon Aurora PostgreSQL Limitless Database is now available in the AWS GovCloud (US-East, US-West) Regions

Aurora PostgreSQL Limitless Database, now available in AWS GovCloud (US-East, US-West) Regions, makes it easy for you to scale your relational database workloads by providing a serverless endpoint that automatically distributes data and queries across multiple Amazon Aurora Serverless instances while maintaining the transactional consistency of a single database. Aurora PostgreSQL Limitless Database offers capabilities such as distributed query planning and transaction management, removing the need for you to create custom solutions or manage multiple databases to scale. As your workloads increase, Aurora PostgreSQL Limitless Database adds additional compute resources while staying within your specified budget, so there is no need to provision for peak, and compute automatically scales down when demand is low.

Aurora PostgreSQL Limitless Database is available with PostgreSQL 16.6, 16.8, and 16.9 compatibility in these regions. For pricing details and Region availability, visit Amazon Aurora pricing. To learn more, read the Aurora PostgreSQL Limitless Database documentation and get started by creating an Aurora PostgreSQL Limitless Database in only a few steps in the Amazon RDS console.
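The "specified budget" above corresponds to a capacity cap on the DB shard group that backs the Limitless endpoint. A minimal sketch of the request that creates one (boto3: `rds_client.create_db_shard_group(**req)`); the identifiers are hypothetical and the field names should be verified against the current RDS API reference:

```python
def build_shard_group_request(cluster_id, shard_group_id, max_acu):
    """Parameters for the RDS CreateDBShardGroup call, which attaches the
    Limitless Database serverless endpoint to an Aurora cluster. MaxACU
    caps the total Aurora capacity units the shard group may scale to."""
    return {
        "DBClusterIdentifier": cluster_id,        # existing Aurora cluster
        "DBShardGroupIdentifier": shard_group_id,
        "MaxACU": max_acu,                        # compute budget ceiling
    }

# Hypothetical identifiers for illustration.
req = build_shard_group_request("my-limitless-cluster", "my-shard-group", 768)
```

Compute scales up toward `MaxACU` as load grows and back down when demand drops, which is what removes the need to provision for peak.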

Amazon S3 now supports conditional deletes in S3 general purpose buckets

Amazon S3 now supports conditional deletes in S3 general purpose buckets, which verify that an object is unchanged before deleting it. This helps you to prevent accidental deletions in high-concurrency, multiple-writer scenarios.

You can now perform conditional deletes using the HTTP if-match header with an ETag value. Amazon S3 will only allow your delete request to succeed if the ETag provided matches that of the object. Additionally, you can use the s3:if-match condition key in your S3 bucket policies to enforce conditional delete operations. For example, you can require clients to use the HTTP if-match header in both S3 DeleteObject and S3 DeleteObjects API requests, helping you to minimize the risk of accidentally deleting objects in your bucket. Conditional deletes are available in S3 general purpose buckets at no additional cost in all AWS Regions. You can use the Amazon S3 API, SDKs, and CLI to perform conditional deletes. To learn more, visit the S3 User guide.
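Both halves of the feature can be sketched as follows: the request parameters for a conditional DeleteObject (boto3: `s3.delete_object(**params)`), and a bucket policy statement that denies deletes lacking the if-match header. The bucket name is illustrative, and the `Null`-condition pattern shown for requiring the header is a common policy idiom; check the S3 User Guide for the exact recommended statement.

```python
def build_conditional_delete(bucket, key, etag):
    """Parameters for S3 DeleteObject with a conditional If-Match header:
    the delete succeeds only if the object's current ETag equals `etag`."""
    return {"Bucket": bucket, "Key": key, "IfMatch": etag}

# Deny DeleteObject requests that omit the if-match header, i.e. require
# conditional deletes. Bucket ARN is illustrative.
policy_statement = {
    "Effect": "Deny",
    "Principal": "*",
    "Action": ["s3:DeleteObject"],
    "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
    # s3:if-match is null exactly when the header was not sent.
    "Condition": {"Null": {"s3:if-match": "true"}},
}
```

A typical flow is to capture the ETag from a prior GetObject or HeadObject response and pass it to the delete; if another writer replaced the object in between, the ETag no longer matches and the delete is rejected instead of destroying the newer data.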

Amazon EC2 R8i and R8i-flex instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Malaysia, Singapore) and Europe (Frankfurt) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver 20% better performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i.

R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes including 2 bare metal and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new R8i and R8i-flex instances visit the AWS News blog.

AWS Transfer Family is now available in AWS Asia Pacific (Taipei) region

Customers in AWS Asia Pacific (Taipei) Region can now use AWS Transfer Family for file transfers over Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS) and Applicability Statement 2 (AS2).

AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over SFTP, FTP, FTPS and AS2 protocols. In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers to modernize and migrate their business-to-business file transfers to AWS. To learn more about AWS Transfer Family, visit our product page and user-guide. See the AWS Region Table for complete regional availability information.

Amazon OpenSearch Service announces Star-Tree Index

OpenSearch has introduced Star-Tree Index, a new feature that significantly improves aggregation performance for high-cardinality and multi-dimensional queries. This index pre-aggregates data across configured dimensions and metrics at ingestion time, enabling sub-second response times for frequent aggregations like terms, histogram, and range.

Star-Tree Index is designed for real-time analytics and requires no changes to query syntax; OpenSearch automatically uses the optimized path when supported queries are detected. Early benchmarks show faster aggregation performance on large datasets. This makes it ideal for use cases such as observability, personalization, and time-series dashboards. It works best with append-only data and builds during segment refresh/merge, with minimal impact on ingestion throughput. Star-Tree Index is available in all regions where OpenSearch 3.1 is supported. The feature is opt-in and can be enabled at index creation time using composite index settings. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about Star-Tree Index, see the OpenSearch documentation.
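The opt-in at index creation can be sketched as an index body like the one below, following the shape shown in the OpenSearch documentation. The field names (`status`, `port`, `size`) and the star-tree name are illustrative, and the exact setting and mapping keys should be checked against the OpenSearch 3.1 docs.

```python
import json

# Index body enabling a star-tree composite index (a sketch; verify key
# names against the OpenSearch 3.1 documentation).
index_body = {
    "settings": {
        "index.composite_index": True,
        "index.append_only.enabled": True,  # star-tree targets append-only data
    },
    "mappings": {
        "composite": {
            "request_aggs": {              # illustrative star-tree name
                "type": "star_tree",
                "config": {
                    "ordered_dimensions": [{"name": "status"}, {"name": "port"}],
                    "metrics": [{"name": "size", "stats": ["sum", "value_count"]}],
                },
            }
        },
        "properties": {
            "status": {"type": "integer"},
            "port": {"type": "integer"},
            "size": {"type": "integer"},
        },
    },
}

# With the opensearch-py client this body would be passed to
#   client.indices.create(index="logs", body=index_body)
print(json.dumps(index_body["settings"]))
```

After creation, no query changes are needed: aggregations over the configured dimensions and metrics (for example, a terms aggregation on `status` summing `size`) are answered from the pre-aggregated star-tree automatically.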

Amazon OpenSearch Service announces Derived Source for storage optimization

Amazon OpenSearch Service introduces support for Derived Source, a new feature that can help reduce the amount of storage required for your OpenSearch Service domains. With derived source support, you can skip storing source fields and dynamically derive them when required.

OpenSearch stores each ingested document in the _source field and also indexes individual fields for search. The _source field can consume significant storage space. To reduce storage use, you can configure OpenSearch to skip storing the _source field and instead reconstruct it dynamically when needed, for example, during search, get, mget, reindex, or update operations. Derived Source is available in all regions where OpenSearch 3.1 is supported. The feature is opt-in and can be enabled at index creation using composite index settings. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about Derived Source, see the OpenSearch documentation.
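As with Star-Tree, the opt-in happens at index creation. The setting key below is an assumption based on the feature name; verify the exact key in the OpenSearch 3.1 documentation before relying on it.

```python
# Sketch of an index body with Derived Source enabled. The setting name
# "index.derived_source.enabled" is an assumed key, not confirmed by the
# announcement; check the OpenSearch 3.1 docs for the exact setting.
index_body = {
    "settings": {
        "index.derived_source.enabled": True,  # skip storing _source on disk
    },
    "mappings": {
        "properties": {"message": {"type": "text"}},
    },
}
```

Note that _source is reconstructed from the indexed fields on demand, so every field you may need back verbatim has to be indexed or stored in a recoverable form; that trade-off is what buys the storage savings.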

AWS Blogs

AWS Japan Blog (Japanese)

AWS Big Data Blog

Containers

Artificial Intelligence

AWS Security Blog

Open Source Project

AWS CLI

Amplify for iOS