11/18/2025, 12:00:00 AM ~ 11/19/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon EC2 P6-B300 instances with NVIDIA Blackwell Ultra GPUs are now available
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon EC2 P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps dedicated ENA throughput, and 4 TB of system memory.

P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory, and 1.5x the GPU TFLOPS (at FP4, without sparsity) of P6-B200 instances, making them well suited to training and deploying trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking bandwidth and larger memory deliver faster training times and higher token throughput for AI workloads.
P6-B300 instances are now available in the p6-b300.48xlarge size through Amazon EC2 Capacity Blocks for ML and Savings Plans in the following AWS Region: US West (Oregon). For on-demand reservation of P6-B300 instances, please reach out to your account manager.
To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Amazon OpenSearch Serverless now adds audit logs for data plane APIs
Amazon OpenSearch Serverless now supports detailed audit logging of data plane requests via AWS CloudTrail. This feature enables customers to record user actions on their collections, helping meet compliance regulations, improve security posture, and provide evidence for security investigations. Customers can now track user activities such as authorization attempts, index modifications, and search queries.

Customers can use CloudTrail to configure filters for OpenSearch Serverless collections with read-only and write-only options, or use advanced event selectors for more granular control over logged data events. All OpenSearch Serverless data events are delivered to an Amazon S3 bucket and optionally to Amazon CloudWatch Events, creating a comprehensive audit trail. This enhanced visibility into who made API calls, and when, helps security and operations teams monitor data access and respond to events in real time. Once configured, audit logs are continuously streamed to CloudTrail with no additional customer action required, and can be analyzed further there. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
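As a sketch of the advanced event selector approach described above, the snippet below builds the parameters for CloudTrail's PutEventSelectors call to log read-only OpenSearch Serverless data events. The trail name is a placeholder and the resource type string `AWS::AOSS::Collection` is an assumption; verify both against the CloudTrail data events documentation before use (the dict would be passed to boto3's `cloudtrail` client as `put_event_selectors(**params)`).

```python
# Build an advanced event selector that logs OpenSearch Serverless data
# events. "AWS::AOSS::Collection" is the assumed resource type for
# OpenSearch Serverless collections; confirm it in the CloudTrail docs.
def build_aoss_data_event_selector(trail_name, read_only=None):
    field_selectors = [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::AOSS::Collection"]},
    ]
    if read_only is not None:
        # True restricts logging to reads (e.g. search queries);
        # False restricts it to writes (e.g. index modifications).
        field_selectors.append(
            {"Field": "readOnly", "Equals": [str(read_only).lower()]})
    return {
        "TrailName": trail_name,
        "AdvancedEventSelectors": [
            {"Name": "Log OpenSearch Serverless data events",
             "FieldSelectors": field_selectors}
        ],
    }

params = build_aoss_data_event_selector("my-trail", read_only=True)
print(params["AdvancedEventSelectors"][0]["FieldSelectors"])
```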
EC2 Auto Scaling now offers a synchronous API to launch instances inside an Auto Scaling group
Today, EC2 Auto Scaling is launching a new API, LaunchInstances, which gives customers more control and flexibility over how EC2 Auto Scaling provisions instances while providing instant feedback on capacity availability.

Customers use EC2 Auto Scaling for automated fleet management. With scaling policies, EC2 Auto Scaling can automatically add instances when demand spikes and remove them when traffic drops, ensuring customers’ applications always have the right amount of compute. EC2 Auto Scaling also offers the ability to monitor and replace unhealthy instances. In certain use cases, customers may want to specify exactly where EC2 Auto Scaling should launch additional instances and need immediate feedback on capacity availability. The new LaunchInstances API allows customers to precisely control where instances are launched by specifying an override for any Availability Zone and/or subnet in an Auto Scaling group, while providing immediate feedback on capacity availability. This synchronous operation gives customers real-time insight into scaling operations, enabling them to quickly implement alternative strategies if needed. For additional flexibility, the API includes optional asynchronous retries to help reach the desired capacity.

This feature is now available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Singapore), at no additional cost beyond standard EC2 and EBS usage. To get started, visit the AWS Command Line Interface (CLI) and the AWS SDKs. To learn more about this feature, visit the AWS documentation.
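Since LaunchInstances is a brand-new API, the sketch below only illustrates the request shape the announcement implies: a target group, a capacity count, an Availability Zone/subnet override, and an optional asynchronous-retry flag. Every field name here (`Capacity`, `AvailabilityZone`, `SubnetId`, `RetryStrategy`) is an assumption, not a verified signature; check the EC2 Auto Scaling API reference before calling it via the AWS CLI or SDK.

```python
# Hypothetical request builder for the new LaunchInstances API.
# All field names are assumptions for illustration only.
def build_launch_instances_request(asg_name, count, availability_zone=None,
                                   subnet_id=None, retry_async=False):
    req = {
        "AutoScalingGroupName": asg_name,
        "Capacity": count,                # assumed name for the instance count
    }
    if availability_zone:
        req["AvailabilityZone"] = availability_zone  # assumed AZ override field
    if subnet_id:
        req["SubnetId"] = subnet_id                  # assumed subnet override field
    if retry_async:
        req["RetryStrategy"] = "ASYNC"   # assumed flag for optional async retries
    return req

# Synchronously launch 2 instances pinned to one AZ; on an insufficient
# capacity error the caller can immediately try another AZ.
req = build_launch_instances_request("web-asg", 2, availability_zone="us-east-1a")
print(req)
```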
Amazon Bedrock introduces Priority and Flex inference service tiers
Today, Amazon Bedrock introduces two new inference service tiers to optimize costs and performance for different AI workloads. The new Flex tier offers cost-effective pricing for non-time-critical applications like model evaluations and content summarization, while the Priority tier provides premium performance and preferential processing for mission-critical applications. For most models that support the Priority tier, customers can realize up to 25% better output tokens per second (OTPS) latency compared to the Standard tier. These join the existing Standard tier for everyday AI applications with reliable performance.

These service tiers address key challenges that organizations face when deploying AI at scale. The Flex tier is designed for non-interactive workloads that can tolerate longer latencies, making it ideal for model evaluations, content summarization, labeling and annotation, and multistep agentic workflows, and it’s priced at a discount relative to the Standard tier. During periods of high demand, Flex requests receive lower priority relative to the Standard tier. The Priority tier is an ideal fit for mission-critical applications, real-time end-user interactions, and interactive experiences where consistent, fast responses are essential. During periods of high demand, Priority requests receive processing priority, at a premium price, over other service tiers.

These new service tiers are available today for a range of leading foundation models, including OpenAI (gpt-oss-20b, gpt-oss-120b), DeepSeek (DeepSeek V3.1), Qwen3 (Coder-480B-A35B-Instruct, Coder-30B-A3B-Instruct, 32B dense, Qwen3-235B-A22B-2507), and Amazon Nova (Nova Pro and Nova Premier). With these new options, Amazon Bedrock helps customers gain greater control over balancing cost efficiency with performance requirements, enabling them to scale AI workloads economically while ensuring optimal user experiences for their most critical applications.
For more information about the AWS Regions where Amazon Bedrock Priority and Flex inference service tiers are available, see the AWS Regions table
Learn more about service tiers in our News Blog and documentation.
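To make the tier selection concrete, the sketch below assembles a Bedrock inference request that names a service tier. The request field `serviceTier` and its values are assumptions for illustration (the launch blog and documentation define the actual parameter); the dict would be passed to a `bedrock-runtime` client call such as `converse(**req)`.

```python
# Hypothetical request builder showing where a service tier choice would
# attach to a Bedrock inference call. "serviceTier" is an assumed field name.
def build_tiered_request(model_id, prompt, tier="standard"):
    assert tier in ("standard", "flex", "priority")  # the three announced tiers
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "serviceTier": tier,   # assumed field name; check the Bedrock docs
    }

# A batch summarization job tolerates latency, so it targets the cheaper Flex tier.
req = build_tiered_request("amazon.nova-pro-v1:0",
                           "Summarize this report.", tier="flex")
print(req["serviceTier"])
```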
Amazon Polly expands Generative TTS engine with additional languages and region support
Today, we are excited to announce the general availability of five highly expressive Amazon Polly Generative voices in Austrian German (Hannah), Irish English (Niamh), Brazilian Portuguese (Camila), Belgian Dutch (Lisa), and Korean (Seoyeon). This release follows our October launch of the Netherlands Dutch (Laura) Generative voice, bringing our total Generative engine offering to thirty-one voices across twenty locales. Additionally, we have expanded the Generative engine to three new regions in Asia Pacific: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo).

Amazon Polly is a fully managed service that turns text into lifelike speech, allowing developers and builders to enable their applications for conversational AI or for speech content creation. All new and existing Generative voices are now available in the US East (N. Virginia), Europe (Frankfurt), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions. To hear how Polly voices sound, go to Amazon Polly Features. To learn more about how to use the Generative engine, go to the AWS Blog. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
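A minimal sketch of using one of the new voices follows, built on Polly's real SynthesizeSpeech parameters (`Engine`, `VoiceId`, `OutputFormat`, `Text`). The boto3 call itself is left commented out so the sketch runs without AWS credentials; the sample text is illustrative.

```python
# Assemble SynthesizeSpeech parameters for a Generative-engine voice.
def build_synthesize_request(text, voice_id="Hannah", output_format="mp3"):
    return {
        "Engine": "generative",     # select the Generative TTS engine
        "VoiceId": voice_id,        # e.g. Hannah (Austrian German), Niamh (Irish English)
        "OutputFormat": output_format,
        "Text": text,
    }

params = build_synthesize_request("Grüß Gott aus Wien!")
# With credentials configured, the actual call would be:
# import boto3
# audio = boto3.client("polly").synthesize_speech(**params)["AudioStream"].read()
print(params["Engine"], params["VoiceId"])
```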
Amazon EC2 I7ie instances now available in AWS Asia Pacific (Singapore) Region
Starting today, Amazon EC2 I7ie instances are available in the Asia Pacific (Singapore) Region. Designed for large, storage I/O intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB of local NVMe storage density for storage optimized instances, and up to twice as many vCPUs and as much memory as prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently very low latency when accessing large data sets. These instances are available in 9 virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.
AWS Transfer Family announces Terraform module to automate scanning of transferred files
The AWS Transfer Family Terraform module now supports deployment of automated malware scanning workflows for files transferred using Transfer Family resources. This release streamlines centralized provisioning of threat detection workflows using Amazon GuardDuty S3 Protection, helping you meet data security requirements by identifying potential threats in transferred files.

AWS Transfer Family provides fully managed file transfers over SFTP, AS2, FTPS, FTP, and web browser-based interfaces for AWS storage services. Using the new module, you can programmatically provision workflows to scan incoming files, dynamically route files based on scan results, and generate threat notifications, all in a single deployment. You can granularly implement threat detection for specific S3 prefixes while preserving folder structures post scanning, and ensure that only verified clean files reach your business applications and data lakes. This eliminates the overhead and risks associated with manual configurations, and provides a scalable deployment option for data security compliance. Customers can get started by using the new module from the Terraform Registry. To learn more about Transfer Family, visit the product page and user guide. To see all the regions where Transfer Family is available, visit the AWS Region table.
Amazon RDS Optimized Reads now supports R8gd and M8gd database instances
Amazon Relational Database Service (RDS) now supports R8gd and M8gd database instances for Optimized Reads on Amazon Aurora PostgreSQL and RDS for PostgreSQL, MySQL, and MariaDB. R8gd and M8gd database instances offer improved price-performance. For example, Optimized Reads on R8gd instances deliver up to 165% better throughput and up to 120% better price-performance over R6g instances for Aurora PostgreSQL.

Optimized Reads uses the local NVMe-based SSD block storage available on these instances to store ephemeral data, such as temporary tables, reducing data access to and from network-based storage and improving read latency and throughput. The result is improved query performance for complex queries and faster index rebuild operations. Aurora PostgreSQL Optimized Reads instances using the I/O-Optimized configuration additionally use the local storage to extend their caching capacity: database pages that are evicted from the in-memory buffer cache are cached in local storage to speed subsequent retrieval of that data. Customers can get started with Optimized Reads through the AWS Management Console, CLI, and SDK by modifying their existing Aurora and RDS databases or creating a new database using R8gd or M8gd instances. These instances are available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Spain, Frankfurt), and Asia Pacific (Tokyo) Regions. For complete information on pricing and regional availability, please refer to the pricing page. For information on specific engine versions that support these DB instance types, please see the Aurora and RDS documentation.
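Switching an existing database to one of these instance classes is a standard ModifyDBInstance call. The sketch below builds the parameters with an illustrative identifier and class; the dict would be passed to boto3's `rds` client as `modify_db_instance(**params)`.

```python
# Build ModifyDBInstance parameters to move a database onto an R8gd
# instance class, enabling Optimized Reads on its local NVMe storage.
def build_modify_request(instance_id, instance_class="db.r8gd.xlarge",
                         apply_immediately=False):
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        # False defers the change to the next maintenance window.
        "ApplyImmediately": apply_immediately,
    }

params = build_modify_request("reports-db", apply_immediately=True)
print(params["DBInstanceClass"])
```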
Active threat defense now enabled by default in AWS Network Firewall
Starting today, AWS Network Firewall enables active threat defense by default in alert mode when you create new firewall policies in the AWS Management Console. Active threat defense provides automated, intelligence-driven protection against dynamic, ongoing threat activities observed across AWS infrastructure.

With this default setting you get visibility into threat activity and the indicator groups, types, and threat names you are protected against. You can switch to block mode to automatically prevent suspicious traffic, such as command-and-control (C2) communication, embedded URLs, and malicious domains, or disable the feature entirely. AWS verifies threat indicators to ensure high accuracy and minimize false positives. Active threat defense is available in all regions where AWS Network Firewall is available, including the AWS GovCloud (US) and China Regions. To learn more about active threat defense and pricing, see the AWS Network Firewall product page and documentation.
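For policies managed outside the console, active threat defense is attached as a managed stateful rule group reference. The sketch below builds that fragment; the managed rule group name `AttackInfrastructure` and the `aws-managed` ARN layout are assumptions for illustration, so verify the exact ARN in the Network Firewall documentation before using it in a real policy.

```python
# Hypothetical firewall-policy fragment referencing the active threat
# defense managed rule group. The rule group name and ARN shape are
# assumptions; confirm them in the AWS Network Firewall docs.
def build_policy_fragment(region):
    arn = (f"arn:aws:network-firewall:{region}:aws-managed:"
           "stateful-rulegroup/AttackInfrastructure")
    return {
        # Alert-only by default; review alerts before switching the
        # rule group to block (drop) behavior.
        "StatefulRuleGroupReferences": [{"ResourceArn": arn}],
    }

fragment = build_policy_fragment("us-east-1")
print(fragment["StatefulRuleGroupReferences"][0]["ResourceArn"])
```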
Workshops now available in AWS Builder Center
AWS Builder Center now provides access to the catalog of AWS Workshops, offering step-by-step instructions crafted by AWS experts that explain how to deploy and use AWS services effectively. These workshops cover a wide range of AWS services and use cases, allowing builders to follow guided tutorials within their own AWS accounts. Workshops are designed for builders of all skill levels to gain practical experience and develop solutions tailored to their specific business needs using AWS services.

The AWS Workshops Catalog features hundreds of workshops with advanced filtering capabilities to quickly find relevant content by category (Machine Learning, Security, Serverless), AWS service (EC2, Lambda, S3), and complexity level (100-Beginner through 400-Expert). Real-time search with partial matching across workshop titles, descriptions, services, and categories helps surface the most relevant content. Catalog content is automatically localized based on your Builder Center language preference.
Builders can navigate to the Workshops catalog at builder.aws.com/build/workshops and filter by specific needs—whether you have 1 hour or 8 hours, are a beginner or expert, or want to focus on specific services like Amazon Bedrock and SageMaker. Seamless navigation from Builder Center discovery to the full workshops experience enables hands-on, step-by-step guided learning in your own AWS account.
You can begin exploring Workshops in AWS Builder Center immediately with a free Builder ID. To get started with Workshops, visit AWS Builder Center.
AWS announces flat-rate pricing plans for website delivery and security
Amazon Web Services (AWS) is launching flat-rate pricing plans with no overages for website delivery and security. The flat-rate plans, available with Amazon CloudFront, combine global content delivery with AWS WAF, DDoS protection, Amazon Route 53 DNS, Amazon CloudWatch Logs ingestion, and serverless edge compute into a simple monthly price with no overage charges. Each plan also includes monthly Amazon S3 storage credits to help offset your storage costs.

CloudFront flat-rate plans allow you to deliver your websites and applications without calculating costs across multiple AWS services. You won’t face the risk of overage charges, even if your website or application goes viral or faces a DDoS attack. Security features like WAF and DDoS protection are enabled by default, and additional configurations are simple to set up. When you serve your AWS applications through CloudFront instead of directly to the internet, your flat-rate plan covers the data transfer costs between your applications and your viewers for a simple monthly price without the worry of overages. This simplified pricing model is available alongside pay-as-you-go pricing for each CloudFront distribution, giving you the flexibility to choose the right pricing model and feature set for each application. Plans are available in Free ($0/month), Pro ($15/month), Business ($200/month), and Premium ($1,000/month) tiers for new and existing CloudFront distributions. Select the plan tier with the features and usage allowances matching your application’s needs. To learn more, refer to the Launch Blog, Plans and Pricing, or the CloudFront Developer Guide. To get started, visit the CloudFront console.
Amazon Redshift now supports Just-In-Time (JIT) ANALYZE for Apache Iceberg tables
Amazon Redshift today announces the general availability of the Just-In-Time (JIT) ANALYZE capability for Apache Iceberg tables, enabling users to run high performance read and write analytics queries on Apache Iceberg tables in their data lake. The Apache Iceberg open table format has been used by many customers to simplify data processing on rapidly expanding and evolving tables stored in data lakes.

Unlike traditional data warehouses, data lakes often lack comprehensive table-level and column-level statistics about the underlying data, making it challenging for query engines to choose the most optimal query execution plans. Sub-optimal query execution plans can lead to slower and less predictable performance. JIT ANALYZE is a new Amazon Redshift feature that automatically collects and utilizes statistics for Iceberg tables during query execution, eliminating manual statistics collection while giving the query engine the information it needs to generate optimal query execution plans. The system uses intelligent heuristics to identify queries that will benefit from statistics, maintains lightweight sketch data structures, and builds high quality table-level and column-level statistics. JIT ANALYZE delivers out-of-the-box performance on par with queries that have pre-calculated statistics, while providing the foundation for many other performance optimizations.

The Amazon Redshift JIT ANALYZE feature for Apache Iceberg tables is now available in all AWS Regions where Amazon Redshift is available. Users do not need to make any changes or enable any settings to take advantage of this new data lake query optimization. To get started, visit the Amazon Redshift Management Guide.
AWS announces Supplementary Packages for Amazon Linux
Today, AWS announces the general availability of Supplementary Packages for Amazon Linux (SPAL), a dedicated repository that provides developers and system administrators with streamlined access to thousands of pre-built EPEL9 packages compatible with Amazon Linux 2023 (AL2023). Amazon Linux serves as the foundation for countless applications running on AWS, but developers often face lengthy processes when building packages from source code. SPAL addresses this challenge by offering ready-to-use packages that accelerate development workflows for teams working with AL2023 environments.

With SPAL, development teams can significantly reduce deployment times and focus on core application development rather than package compilation. This solution is particularly valuable for DevOps engineers, system administrators, and developers who need reliable packages for production workloads without the overhead of building from source. SPAL packages are derived from the community-maintained EPEL9 project, with AWS providing security patches as they become available upstream. AWS will continue expanding the repository based on customer feedback through the Amazon Linux 2023 GitHub repository. Supplementary Packages for Amazon Linux (SPAL) is available in all AWS Commercial Regions, as well as the AWS GovCloud (US) and China Regions. To get started, review the available packages in the SPAL repository and update your package management configuration to include the SPAL repository for Amazon Linux 2023. To learn more about this feature, consult the SPAL FAQs or the AWS Documentation.
Amazon RDS for MariaDB now supports community MariaDB minor versions 10.6.24, 10.11.15, and 11.4.9
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports community MariaDB minor versions 10.6.24, 10.11.15, and 11.4.9. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MariaDB instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MariaDB. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
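The upgrade itself is a ModifyDBInstance call targeting the new engine version. The sketch below builds the parameters with an illustrative instance identifier; the dict would be passed to boto3's `rds` client as `modify_db_instance(**params)`, ideally after validating on a non-production instance or via a Blue/Green deployment.

```python
# Build ModifyDBInstance parameters to move a MariaDB instance to one
# of the newly supported minor versions.
def build_upgrade_request(instance_id, engine_version="11.4.9"):
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": engine_version,
        "ApplyImmediately": False,        # upgrade in the next maintenance window
        "AutoMinorVersionUpgrade": True,  # opt in to future automatic minors
    }

params = build_upgrade_request("orders-mariadb")
print(params["EngineVersion"])
```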
Amazon RDS for Oracle now supports October 2025 Release Update and Spatial Patch Bundle
Amazon Relational Database Service (Amazon RDS) for Oracle now supports the Oracle October 2025 Release Update (RU) for Oracle Database versions 19c and 21c, and the corresponding Spatial Patch Bundle for Oracle Database version 19c. We recommend upgrading to the October 2025 RU as it includes 6 new security patches for Oracle database products. For additional details, refer to the Oracle release notes. The Spatial Patch Bundle update delivers important fixes for Oracle Spatial and Graph functionality to provide reliable and optimal performance for spatial operations.

You can apply the October 2025 Release Update with just a few clicks from the Amazon RDS Management Console, or by using the AWS SDK or CLI. To automatically apply updates to your database instance during your maintenance window, enable Automatic Minor Version Upgrade. You can apply the Spatial Patch Bundle update for new database instances, or upgrade existing instances to engine version ‘19.0.0.0.ru-2025-10.spb-1.r1’ by selecting the “Spatial Patch Bundle Engine Versions” checkbox in the AWS Console. Learn more about upgrading your database instances from the Amazon RDS User Guide.
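Applying the Spatial Patch Bundle via the SDK is again a ModifyDBInstance call, using the engine version string quoted in the announcement. The instance identifier below is illustrative; the dict would be passed to boto3's `rds` client as `modify_db_instance(**params)`.

```python
# Engine version string for the October 2025 Spatial Patch Bundle,
# as quoted in the announcement.
SPB_VERSION = "19.0.0.0.ru-2025-10.spb-1.r1"

def build_spb_upgrade(instance_id):
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": SPB_VERSION,
        "ApplyImmediately": False,  # defer to the next maintenance window
    }

params = build_spb_upgrade("gis-oracle")
print(params["EngineVersion"])
```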
Amazon FSx for Lustre improves directory listing performance by up to 5x
Amazon FSx for Lustre now delivers up to 5x faster directory listing (ls) performance, allowing you to browse and analyze the contents of your file systems more efficiently.

Amazon FSx for Lustre is a high-performance, cost-effective, and scalable file storage service for compute-intensive workloads like machine learning training, financial analytics, and high-performance computing. ML researchers, data scientists, and developers who use FSx for Lustre for compute-intensive workloads commonly use their file systems to store data for interactive use cases like home directories and source code repositories. Today’s performance improvement makes FSx for Lustre even faster for these interactive use cases by reducing the time it takes to list and analyze the contents of directories using “ls”. The performance improvements are supported with the latest Lustre 2.15 client in all AWS regions where FSx for Lustre is available. To get started, upgrade to the latest 2.15 client and follow the instructions in the Amazon FSx for Lustre documentation to apply the recommended client tunings.
Amazon MSK Replicator is now available in two additional AWS Regions
You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in the Asia Pacific (Hyderabad) and Asia Pacific (Malaysia) Regions.

MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, Access Control Lists (ACLs), and consumer group offsets. If an unexpected event occurs in a region, you can fail over to the other AWS Region and seamlessly resume processing.

With this launch, MSK Replicator is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), China (Beijing), China (Ningxia), Asia Pacific (Hyderabad), and Asia Pacific (Malaysia). You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. To learn more, visit the MSK Replicator product page, pricing page, and documentation.
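As an outline of what a replicator definition involves, the sketch below assembles a CreateReplicator request that mirrors all topics and consumer group offsets from a source cluster to a target cluster. Field names follow the Amazon MSK API as I understand it, but treat this as a sketch and confirm against the current boto3 `kafka` client reference; all ARNs, subnets, and names are placeholders.

```python
# Hedged CreateReplicator request builder: replicate every topic and
# consumer group from source to target. Verify field names in the MSK docs.
def build_replicator_request(name, source_arn, target_arn, subnets, sgs, role_arn):
    vpc = {"SubnetIds": subnets, "SecurityGroupIds": sgs}
    return {
        "ReplicatorName": name,
        "KafkaClusters": [
            {"AmazonMskCluster": {"MskClusterArn": source_arn}, "VpcConfig": vpc},
            {"AmazonMskCluster": {"MskClusterArn": target_arn}, "VpcConfig": vpc},
        ],
        "ReplicationInfoList": [{
            "SourceKafkaClusterArn": source_arn,
            "TargetKafkaClusterArn": target_arn,
            "TargetCompressionType": "NONE",
            "TopicReplication": {"TopicsToReplicate": [".*"]},
            "ConsumerGroupReplication": {"ConsumerGroupsToReplicate": [".*"]},
        }],
        # IAM role MSK Replicator assumes to read/write both clusters.
        "ServiceExecutionRoleArn": role_arn,
    }

req = build_replicator_request(
    "primary-to-dr",
    "arn:aws:kafka:...:cluster/src", "arn:aws:kafka:...:cluster/dst",
    ["subnet-1"], ["sg-1"], "arn:aws:iam::...:role/msk-replicator")
print(req["ReplicatorName"])
```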
Amazon Redshift now supports the SUPER data type in databases with case insensitive collation

Amazon Redshift announces support for the SUPER data type in databases with case insensitive collation, enabling analytics on semi-structured and nested data in these databases. Using the SUPER data type with PartiQL in Amazon Redshift, you can perform advanced analytics that combine structured SQL data (such as string, numeric, and timestamp) with semi-structured SUPER data (such as JSON) with flexibility and ease of use.

This enhancement allows you to leverage the SUPER data type for your structured and semi-structured data processing needs in databases with case-insensitive collation. Using the COLLATE function, you can now explicitly specify case sensitivity preferences for SUPER columns, providing greater flexibility in handling data with varying case patterns. This is particularly valuable when working with JSON documents, APIs, or application data where case consistency isn’t guaranteed. Whether you’re processing user-defined identifiers or integrating data from multiple sources, you can now perform complex queries across both case-sensitive and case-insensitive data without additional normalization overhead. Amazon Redshift support for the SUPER data type in databases with case insensitive collation is available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon Redshift is available. See the AWS Region Table for more details. To learn more about the SUPER data type in databases with case insensitive collation, please visit our documentation.
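To illustrate the COLLATE override on a SUPER attribute, the sketch below wraps a PartiQL-style query in a Redshift Data API request. The table and column names are hypothetical, and the exact COLLATE invocation should be checked against the Redshift documentation; the dict would be passed to boto3's `redshift-data` client as `execute_statement(**params)`.

```python
# Hypothetical PartiQL query: navigate a SUPER column and force a
# case-sensitive comparison on one attribute in a case-insensitive database.
SQL = """
SELECT event.payload.user_id
FROM clickstream_events AS event
WHERE COLLATE(event.payload.source::varchar, 'case_sensitive') = 'MobileApp';
"""

def build_data_api_request(cluster_id, database, sql):
    # Pass to boto3's redshift-data client: execute_statement(**params)
    return {"ClusterIdentifier": cluster_id, "Database": database, "Sql": sql}

params = build_data_api_request("analytics-cluster", "dev", SQL)
print("COLLATE" in params["Sql"])
```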
AWS Backup launches a low-cost warm storage tier for Amazon S3 backups
AWS Backup introduced a low-cost warm storage tier for Amazon S3 backup data that can reduce costs by up to 30%. After S3 backup data resides in a vault for 60 days (or longer, based on your settings), you can move it to the new low-cost warm storage tier. The low-cost tier provides the same performance and features as the warm storage tier, including ransomware protection, recovery, and auditing.

Use the new low-cost warm storage tier to reduce storage costs for business, compliance, or regulatory data you must retain long-term. With this launch, you can now configure automatic tiering for all S3 backups for all vaults in an account, a specific vault, or a bucket within a vault by setting an age threshold of 60 days or more. When you enable tiering, existing backup data beyond the threshold automatically moves to the low-cost warm tier, delivering immediate cost savings with no action required and no performance impact.
This low-cost storage tier is available in all AWS Regions where AWS Backup for Amazon S3 is available. There is a one-time transition fee when data moves to the low-cost warm tier. For additional pricing information, visit the AWS Backup pricing page.
To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
AWS Lambda adds support for Python 3.14
AWS Lambda now supports creating serverless applications using Python 3.14. Developers can use Python 3.14 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Python 3.14 is the latest stable release of Python, and this launch gives Lambda customers access to its latest language features. You can use Python 3.14 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (Python), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Python 3.14. You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation, to deploy and manage serverless applications written in Python 3.14. The Python 3.14 runtime is available in all Regions, including the AWS GovCloud (US) Regions and China Regions. For more information, including guidance on upgrading existing Lambda functions, read our blog post. For more information about AWS Lambda, visit the product page.
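Targeting the new runtime is just a matter of setting `Runtime` to `python3.14` when creating or updating a function. The sketch below builds CreateFunction parameters; the function name, role ARN, handler, and zip bytes are placeholders, and the dict would be passed to boto3's `lambda` client as `create_function(**params)`.

```python
# Build CreateFunction parameters for the python3.14 managed runtime.
def build_create_function(name, role_arn, zip_bytes):
    return {
        "FunctionName": name,
        "Runtime": "python3.14",   # new managed runtime
        "Role": role_arn,          # execution role the function assumes
        "Handler": "app.handler",  # module "app", function "handler"
        "Code": {"ZipFile": zip_bytes},
    }

params = build_create_function(
    "hello-py314", "arn:aws:iam::123456789012:role/lambda-exec", b"placeholder")
print(params["Runtime"])
```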
Amazon EC2 I7i instances now available in additional AWS regions
Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS Asia Pacific (Melbourne, Mumbai, Osaka) and Middle East (UAE) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.

I7i instances are ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). I7i instances support the torn write prevention feature with up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.
Safely handle configuration drift with AWS CloudFormation drift-aware change sets
AWS CloudFormation launches drift-aware change sets, which compare an IaC template with the actual state of infrastructure and bring drifted resources in line with their template definitions. Configuration drift occurs when infrastructure managed by IaC is modified via the AWS Management Console, SDK, or CLI. With drift-aware change sets, you can revert drift and keep infrastructure in sync with templates. Additionally, you can preview the impact of deployments on drifted resources and prevent unexpected changes.

Customers can modify infrastructure outside of IaC when troubleshooting operational incidents. This creates the risk of unexpected changes in future IaC deployments, impacts the security posture of infrastructure, and hampers reproducibility for testing and disaster recovery. Standard change sets can compare a new template with your last-deployed template, but do not consider drift. Drift-aware change sets provide a three-way diff between the new template, the last-deployed template, and the actual infrastructure state. If the diff predicts unintended overwrites of drift, you can update your template values and recreate the change set. During change set execution, CloudFormation will match resource properties with template values and recreate resources deleted outside of IaC. If a provisioning error occurs, CloudFormation will restore infrastructure to its actual state before deployment. To get started, create a change set for an existing stack from the CloudFormation Console and choose “Drift-aware” as the change set type. Alternatively, pass the --deployment-mode REVERT_DRIFT parameter to the CreateChangeSet API from the AWS CLI or SDK. To learn more, visit the CloudFormation User Guide. Drift-aware change sets are available in all AWS Regions where CloudFormation is available. Refer to the AWS Region table to learn more.
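The CLI flag quoted in the announcement is --deployment-mode REVERT_DRIFT; the sketch below maps it onto a CreateChangeSet request. The SDK field spelling `DeploymentMode` is an assumption derived from that flag, and the stack name and template body are placeholders; the dict would be passed to boto3's `cloudformation` client as `create_change_set(**params)`.

```python
# Build a drift-aware CreateChangeSet request. "DeploymentMode" is the
# assumed SDK spelling of the CLI's --deployment-mode flag.
def build_drift_aware_change_set(stack_name, template_body):
    return {
        "StackName": stack_name,
        "ChangeSetName": f"{stack_name}-revert-drift",
        "TemplateBody": template_body,
        "DeploymentMode": "REVERT_DRIFT",  # request the three-way drift diff
    }

params = build_drift_aware_change_set("web-stack", "{}")
print(params["DeploymentMode"])
```

After creating the change set, you would review the predicted drift reversions in the console or via DescribeChangeSet before executing it.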
AWS Blogs
AWS Japan Blog (Japanese)
- Kiro Deployment Guide: Everything you need to know before you get started
- Interview with the 20th Information Crisis Management Contest Team C01UMBA
- [Technical Sponsorship Report] 20th Information Crisis Management Contest
- Results of Python beginners taking on the challenge of short-term programming development with generative AI
- AWS Weekly — 2025/11/10
- Learn how to build operations management using AI in re:Invent 2025
- Announcement of Kiroweeeeeeek in Japan
AWS News Blog
- New Amazon Bedrock service tiers help you match AI workload performance with cost
- Accelerate large-scale AI applications with the new Amazon EC2 P6-B300 instances
AWS Architecture Blog
AWS Big Data Blog
AWS Compute Blog
AWS Contact Center
Containers
AWS DevOps & Developer Productivity Blog
AWS HPC Blog
AWS for Industries
Artificial Intelligence
- Bringing tic-tac-toe to life with AWS AI services
- HyperPod enhances ML infrastructure with security and storage
- Accelerating generative AI applications with a platform engineering approach
Networking & Content Delivery
AWS Quantum Technologies Blog
AWS Security Blog
- Analyze AWS Network Firewall logs using Amazon OpenSearch dashboard
- How to automate Session Manager preferences across your organization