4/10/2025, 12:00:00 AM ~ 4/11/2025, 12:00:00 AM (UTC)
Recent Announcements
Load Balancer Capacity Unit Reservation for Gateway Load Balancers
Gateway Load Balancer (GWLB) now supports Load Balancer Capacity Unit (LCU) Reservation, which allows you to proactively set a minimum bandwidth capacity for your load balancer, complementing its existing ability to auto-scale based on your traffic patterns.

Gateway Load Balancer helps you deploy, scale, and manage third-party virtual appliances. With this feature, you can reserve guaranteed capacity for anticipated traffic surges. LCU Reservation is ideal for scenarios such as onboarding and migrating new workloads to your GWLB-gated services without waiting for organic scaling, or maintaining a minimum bandwidth capacity for your firewall applications to meet specific SLA or compliance requirements. When using this feature, you pay only for the reserved LCUs and any additional usage above the reservation. You can easily configure this feature through the ELB console or API. The feature is available for GWLB in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions. This feature is not supported on Gateway Load Balancer Endpoints (GWLBe). To learn more, please refer to the GWLB documentation.
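For programmatic configuration, a reservation can be set with the SDK as well as the console. The sketch below is a minimal boto3 example, assuming the ELBv2 ModifyCapacityReservation/DescribeCapacityReservation APIs; the load balancer ARN and the LCU value are placeholders for illustration.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

GWLB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/gwy/my-gwlb/1234567890abcdef"  # placeholder ARN
)

# Reserve a minimum capacity of 100 LCUs for the Gateway Load Balancer.
elbv2.modify_capacity_reservation(
    LoadBalancerArn=GWLB_ARN,
    MinimumLoadBalancerCapacity={"CapacityUnits": 100},
)

# Check the current reservation status.
status = elbv2.describe_capacity_reservation(LoadBalancerArn=GWLB_ARN)
print(status.get("MinimumLoadBalancerCapacity"))
```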
Amazon S3 Express One Zone reduces storage and request prices
Starting today, Amazon S3 Express One Zone has reduced pricing for storage by 31%, PUT requests by 55%, and GET requests by 85%. In addition, S3 Express One Zone has reduced its per-gigabyte data upload and retrieval charges by 60% and now applies these charges to all bytes rather than just the portion of each request exceeding 512 kilobytes.

Amazon S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications, such as machine learning training, analytics for live streaming events, and market analysis for financial services. These pricing changes apply to S3 Express One Zone in all AWS Regions where the storage class is available. For updated pricing information, visit the S3 pricing page. To learn more about these pricing reductions, read the AWS News Blog, and to learn more about the S3 Express One Zone storage class, visit the product page and S3 User Guide.
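To make the billing-model change concrete, here is a small back-of-the-envelope sketch in Python. The per-GB rate is a hypothetical placeholder, not an actual S3 Express One Zone price; the point is only that the per-GB charge now applies to all bytes of a request instead of just the portion above 512 KB.

```python
# Hypothetical per-GB data upload rate, NOT an actual S3 Express One Zone price.
RATE_PER_GB = 0.0032

KB = 1024
GB = 1024 ** 3

def old_billable_bytes(request_bytes: int) -> int:
    # Previously, only the portion of a request above 512 KB incurred the per-GB charge.
    return max(0, request_bytes - 512 * KB)

def new_billable_bytes(request_bytes: int) -> int:
    # Now the per-GB charge applies to all bytes of the request.
    return request_bytes

request_size = 600 * KB
print("old-model charge:", old_billable_bytes(request_size) / GB * RATE_PER_GB)
print("new-model charge:", new_billable_bytes(request_size) / GB * RATE_PER_GB)
```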
Amazon Bedrock Knowledge Bases extends hybrid search support to Amazon Aurora PostgreSQL and MongoDB Atlas
Amazon Bedrock Knowledge Bases now extends support for hybrid search to knowledge bases created using Amazon Aurora PostgreSQL and MongoDB Atlas vector stores. This capability, which can improve the relevance of results, previously only worked with OpenSearch Serverless and OpenSearch Managed Clusters in Bedrock Knowledge Bases.

Retrieval augmented generation (RAG) applications use semantic search, based on vectors, to search unstructured text. These vectors are created using foundation models to capture contextual and linguistic meaning within data to answer human-like questions. Hybrid search merges semantic and full-text search methods, executing dual queries and combining the results. This approach improves result relevance by retrieving documents that match conceptually from semantic search or that contain specific keywords found in full-text search. The wider search scope enhances result quality, particularly for keyword-based queries. You can enable hybrid search through the Knowledge Base APIs or through the Bedrock console. In the console, you can select hybrid search as your preferred search option within Knowledge Bases, or choose the default search option to use semantic search only. Hybrid search with Aurora PostgreSQL is available in all AWS Regions where Bedrock Knowledge Bases is available, excluding Europe (Zurich) and GovCloud (US) Regions. Hybrid search with MongoDB Atlas is available in the US West (Oregon) and US East (N. Virginia) AWS Regions. To learn more, refer to the Bedrock Knowledge Bases documentation. To get started, visit the Amazon Bedrock console.
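For example, hybrid search can be requested per query through the Retrieve API by overriding the search type. Below is a minimal boto3 sketch with a placeholder knowledge base ID.

```python
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve from a knowledge base backed by Aurora PostgreSQL or MongoDB Atlas,
# overriding the default search type to HYBRID (semantic + full-text).
response = agent_runtime.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder knowledge base ID
    retrievalQuery={"text": "What is our refund policy for enterprise customers?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "overrideSearchType": "HYBRID",
        }
    },
)

for result in response["retrievalResults"]:
    print(result["content"]["text"][:120])
```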
AWS Compute Optimizer now supports 57 new Amazon EC2 instance types
AWS Compute Optimizer now supports 57 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. The newly supported instance types include the latest generation accelerated computing instances (P5e, P5en, G6e), storage optimized instances (I7ie, I8g), and general purpose instances (M8g), as well as high memory instances (U7i) and new instance sizes for C7i-flex and M7i-flex. With these newly supported instance types, AWS Compute Optimizer delivers recommendations to help you identify cost and performance optimization opportunities across a wider range of EC2 instance types, helping you improve performance and cost savings for your workloads.

This new feature is available in all AWS Regions where AWS Compute Optimizer is available, except the AWS GovCloud (US) and China Regions. For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, or AWS SDKs.
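As a quick way to see whether any of the newly supported instance types appear in your recommendations, you can query Compute Optimizer with the SDK. The snippet below is a minimal boto3 sketch.

```python
import boto3

optimizer = boto3.client("compute-optimizer", region_name="us-east-1")

# List EC2 instance recommendations and print the current and top recommended types.
response = optimizer.get_ec2_instance_recommendations(maxResults=50)
for rec in response["instanceRecommendations"]:
    current = rec["currentInstanceType"]
    options = rec.get("recommendationOptions", [])
    best = options[0]["instanceType"] if options else "n/a"
    print(f"{rec['instanceArn']}: {current} -> {best} ({rec['finding']})")
```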
IAM Identity Center releases an SDK plugin for trusted identity propagation with external identity providers
IAM Identity Center has released a new SDK plugin that simplifies AWS resource authorization for applications that authenticate with external identity providers (IdPs) such as Microsoft Entra ID, Okta, and others. The plugin, which supports trusted identity propagation (TIP), streamlines how external IdP tokens are exchanged for IAM Identity Center tokens. These tokens enable precise access control to AWS resources (e.g., Amazon S3 buckets) leveraging user and group memberships as defined in the external IdP.

The new SDK plugin automates the token exchange process, eliminating the need for complex, custom-built workflows. Once configured, it seamlessly handles IAM Identity Center token creation and the generation of user identity-aware credentials. These credentials can be used to create identity-aware IAM role sessions when requesting access to different AWS resources. Currently available for the AWS SDK for Java 2.x and the AWS SDK for JavaScript v3, this TIP plugin is AWS’s recommended solution for implementing user identity-aware authorization. IAM Identity Center enables you to connect your existing source of workforce identities to AWS once, access the personalized experiences offered by AWS applications such as Amazon Q, define and audit user identity-aware access to data in AWS services, and manage access to multiple AWS accounts from a central place. For instructions on installing this plugin, see here. For an example of how Amazon Q Business developers can integrate with this plugin to build user identity-aware GenAI experiences, see here. This plugin is available at no additional cost in all AWS Regions where IAM Identity Center is supported.
Introducing two new Amazon EC2 I7ie bare metal instance sizes
Today, Amazon Web Services (AWS) announces the launch of two new EC2 I7ie bare metal instance sizes. These instances are now available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, London), and Asia Pacific (Tokyo) Regions. The I7ie instances feature 5th generation Intel Xeon Scalable processors with a 3.2 GHz all-core turbo frequency. Compared to I3en instances, they deliver 40% better compute performance and 20% better price performance. I7ie instances offer up to 120 TB of local NVMe storage density, the highest in the cloud for storage optimized instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

EC2 bare metal instances provide direct access to the 5th generation Intel Xeon Scalable processor and memory resources. They allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads incompatible with virtual environments, and licensing-restricted business critical applications. These instances feature three Intel accelerator technologies: Intel Data Streaming Accelerator (DSA), Intel In-Memory Analytics Accelerator (IAA), and Intel QuickAssist Technology (QAT). These accelerators optimize workload performance through efficient data operation offloading and acceleration. I7ie instances offer metal-24xl and metal-48xl sizes with 96 and 192 vCPUs respectively, and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.
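The new sizes can be launched like any other EC2 instance type. Below is a minimal boto3 sketch; the AMI and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one of the new I7ie bare metal sizes (i7ie.metal-48xl shown here).
# The AMI and subnet IDs below are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="i7ie.metal-48xl",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)
```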
Amazon EC2 R6id instances are now available in Europe (Spain) region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R6id instances are available in the Europe (Spain) Region. These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage. R6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage to scale the performance of applications such as data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics.

These instances are generally available today in the US East (Ohio, N. Virginia), US West (Oregon), Canada West (Calgary), Mexico (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Thailand, Tokyo), Europe (Frankfurt, Ireland, London, Spain), Israel (Tel Aviv), and AWS GovCloud (US-West) Regions. Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand, and Spot Instances. To learn more, see Amazon R6id instances. To get started, use the AWS Command Line Interface (CLI) or AWS SDKs.
Amazon Lex adds ability to control intent switching during conversations
Amazon Lex now allows you to disable automatic intent switching during slot elicitation using request attributes. This new capability gives you more control over conversation flows by preventing unintended switches between intents while gathering required information from users. The feature helps maintain focused conversations and reduces the likelihood of the information-gathering process being interrupted.

This enhancement is particularly valuable for complex conversational flows where completing the current interaction is crucial before allowing transitions to other intents. By setting these request attributes, you can ensure that your bot stays focused on collecting all necessary slots or confirmations for the current intent, even if the user’s utterance matches another intent with higher confidence. This helps create more predictable and controlled conversation experiences, especially in scenarios like multi-step form filling or sequential information gathering. This feature is supported for all Lex-supported languages and is available in all AWS Regions where Amazon Lex operates. To learn more about controlling intent switching behavior, please reference the Lex V2 Developer Guide.
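Request attributes are passed on each turn through the Lex V2 runtime. The boto3 sketch below shows the general shape of such a call; the bot IDs, session ID, and in particular the request attribute key are illustrative placeholders, so consult the Lex V2 Developer Guide for the documented attribute name.

```python
import boto3

lex = boto3.client("lexv2-runtime", region_name="us-east-1")

# Send a user utterance while asking Lex not to switch intents during slot elicitation.
# The IDs and the request attribute key below are placeholders/illustrative only.
response = lex.recognize_text(
    botId="BOT1234567",
    botAliasId="ALIAS123",
    localeId="en_US",
    sessionId="user-42-session",
    text="Actually, what's my account balance?",
    requestAttributes={
        # Illustrative key; see the Lex V2 Developer Guide for the documented attribute.
        "x-amz-lex:disable-intent-switching": "true",
    },
)
print(response["sessionState"]["intent"]["name"])
```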
AWS Transfer Family introduces additional configuration options for SFTP connectors
AWS Transfer Family announces new configuration options for SFTP connectors, providing you with more flexibility and performance when connecting to remote SFTP servers. These enhancements include support for the OpenSSH key format for authentication, the ability to discover a remote server’s host key for validating server identity, and the ability to perform concurrent remote operations for improved transfer performance.

SFTP connectors provide a fully managed and low-code capability to copy files between remote SFTP servers and Amazon S3. You can now authenticate connections to remote servers using OpenSSH keys, in addition to the existing option of using PEM-formatted keys. Your connectors can now scan remote servers for their public host keys, which are used to validate the host identity, eliminating the need to retrieve this information manually from server administrators. To improve transfer performance, connectors can now create up to five parallel connections with remote servers. These enhancements give you greater control when connecting to remote SFTP servers to execute file operations. The new configuration options for SFTP connectors are available in all AWS Regions where AWS Transfer Family is available. To learn more about SFTP connectors, visit the documentation. To get started with Transfer Family’s SFTP offerings, take the self-paced SFTP workshop.
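Connector configuration is exposed through the Transfer Family API. Below is a hedged boto3 sketch of creating an SFTP connector with a trusted host key; the URL, ARNs, secret ID, and host key are placeholders, and the commented-out concurrency setting is an assumption about how the new parallel-connection option is surfaced, so verify it against the documentation.

```python
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# Create an SFTP connector to a remote server. All identifiers below are placeholders.
response = transfer.create_connector(
    Url="sftp://sftp.example.com",
    AccessRole="arn:aws:iam::123456789012:role/transfer-connector-role",
    SftpConfig={
        # Secret holding the username and OpenSSH- or PEM-formatted private key.
        "UserSecretId": "arn:aws:secretsmanager:us-east-1:123456789012:secret:sftp-creds",
        # Remote server's public host key, used to validate its identity.
        "TrustedHostKeys": ["sftp.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA..."],
        # Assumption: a concurrency setting for parallel connections (up to five);
        # confirm the exact parameter name in the Transfer Family documentation.
        # "MaxConcurrentConnections": 5,
    },
)
print(response["ConnectorId"])
```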
Amazon Managed Service for Apache Flink is now available in the Mexico (Central) Region
Starting today, customers can use Amazon Managed Service for Apache Flink in the Mexico (Central) Region to build real-time stream processing applications.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors. You can learn more about Amazon Managed Service for Apache Flink here. For Amazon Managed Service for Apache Flink Region availability, refer to the AWS Region Table.
Amazon EC2 M6id instances are now available in US West (N. California) region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6id instances are available in the US West (N. California) Region. These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage.

M6id instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage to scale the performance of applications such as data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics. These instances are generally available today in the US East (Ohio, N. Virginia), US West (Oregon, N. California), Canada West (Calgary), Canada (Central), Mexico (Central), South America (Sao Paulo), Asia Pacific (Tokyo, Sydney, Seoul, Singapore, Malaysia, Mumbai, Thailand), Europe (Zurich, Ireland, Frankfurt, London), and Israel (Tel Aviv) Regions. Customers can purchase the new instances via Savings Plans, Reserved Instances, On-Demand, and Spot Instances. To get started, use the AWS Command Line Interface (CLI) or AWS SDKs. To learn more, visit our product page for M6id.
Amazon RDS for SQL Server supports new minor versions for SQL Server 2019 and 2022
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports new minor versions for SQL Server 2019 (CU32 - 15.0.4430.1) and SQL Server 2022 (CU18 - 16.0.4185.3). These minor versions include performance improvements and bug fixes, and are available for SQL Server Express, Web, Standard, and Enterprise editions. Review the Microsoft release notes for CU32 and CU18 for details.

We recommend that you upgrade to the latest minor versions to benefit from the performance improvements and bug fixes. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. These minor versions are available in all AWS Regions where Amazon RDS for SQL Server is available. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
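A minor version upgrade can also be applied with the SDK or CLI. The boto3 sketch below is minimal and illustrative: the instance identifier and engine version string are placeholders, so confirm the exact target version with describe-db-engine-versions before upgrading.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List available SQL Server (Standard Edition) engine versions to find the target string.
versions = rds.describe_db_engine_versions(Engine="sqlserver-se")
for v in versions["DBEngineVersions"]:
    print(v["EngineVersion"])

# Apply a minor version upgrade to an existing instance (identifier/version are placeholders).
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",
    EngineVersion="16.00.4185.3.v1",  # illustrative; use a value returned above
    ApplyImmediately=True,
)
```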
AWS CodeBuild adds Node 22, Python 3.13, Go 1.24, and Ruby 3.4 to Lambda Compute images
AWS CodeBuild now supports Node 22, Python 3.13, Go 1.24, and Ruby 3.4 in Lambda Compute. These new runtime versions are available in both x86_64 and aarch64 architectures. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

The new Lambda Compute runtime versions are available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (São Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt). To learn more about runtime versions provided by CodeBuild, please visit our documentation. To learn more about CodeBuild’s Lambda Compute mode, see CodeBuild’s documentation for Running builds on Lambda.
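Selecting one of the new runtimes is done through the project's environment image. The boto3 sketch below is a rough outline; the curated image name follows the Lambda compute naming pattern but should be treated as an assumption, and the repository URL and role ARN are placeholders.

```python
import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

# Create a project that builds on Lambda compute with a Python 3.13 runtime image.
# The image name, repository URL, and role ARN are assumptions/placeholders.
codebuild.create_project(
    name="lambda-compute-python313",
    source={"type": "GITHUB", "location": "https://github.com/example/repo.git"},
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_LAMBDA_CONTAINER",
        "computeType": "BUILD_LAMBDA_1GB",
        "image": "aws/codebuild/amazonlinux-x86_64-lambda-standard:python3.13",
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-service-role",
)
```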
Announcing horizontal autoscaling in Amazon ElastiCache for Memcached
Amazon ElastiCache for Memcached now supports horizontal autoscaling, enabling you to automatically adjust the capacity of your self-designed Memcached caches without manual intervention. ElastiCache for Memcached leverages AWS Application Auto Scaling to manage the scaling process and Amazon CloudWatch metrics to determine when to scale in or out, ensuring your Memcached caches maintain steady, predictable performance at the lowest possible cost.

Hundreds of thousands of customers use ElastiCache to improve their database and application performance and optimize costs. ElastiCache for Memcached supports target tracking and scheduled auto scaling policies. With target tracking, you define a target metric and ElastiCache for Memcached adjusts resource capacity in response to live changes in resource utilization. For instance, when memory utilization rises, ElastiCache for Memcached will add nodes to your cache to increase memory capacity and reduce utilization back to the target level. This enables your cache to adjust capacity automatically to maintain high performance. Conversely, when memory utilization drops below the target amount, ElastiCache for Memcached will remove nodes from your cache to reduce over-provisioning and lower costs. With scheduled scaling, you can set specific days and times for ElastiCache to scale your cache to accommodate predictable workload capacity changes. Horizontal autoscaling on ElastiCache for Memcached is now available in all AWS commercial Regions. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page and documentation.
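Because scaling is managed through AWS Application Auto Scaling, it is configured by registering a scalable target and attaching a policy. The boto3 sketch below is a rough outline only: the resource ID format, scalable dimension, and predefined metric type shown for Memcached self-designed caches are assumptions, so check the ElastiCache documentation for the exact values.

```python
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Assumption: resource ID format and scalable dimension for a self-designed Memcached cache.
resource_id = "cache-cluster/my-memcached-cluster"
dimension = "elasticache:cache-cluster:Nodes"

# Register the cache as a scalable target with node-count bounds.
autoscaling.register_scalable_target(
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension=dimension,
    MinCapacity=2,
    MaxCapacity=10,
)

# Attach a target tracking policy (the metric type here is an assumption; see the docs).
autoscaling.put_scaling_policy(
    PolicyName="memcached-memory-target-tracking",
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension=dimension,
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ElastiCacheDatabaseMemoryUsagePercentage"
        },
    },
)
```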
Announcing vertical scaling in Amazon ElastiCache for Memcached
Today, Amazon ElastiCache introduces the ability to perform vertical scaling on self-designed Memcached caches on ElastiCache. Amazon ElastiCache is a fully managed, Valkey-, Memcached-, and Redis OSS-compatible service that delivers real-time, cost-optimized performance for modern applications with 99.99% availability. With this launch, you can now dynamically adjust the compute and memory resources of your ElastiCache for Memcached clusters, providing greater flexibility and scalability.

Hundreds of thousands of customers use ElastiCache to improve their database and application performance and optimize costs. With vertical scaling on ElastiCache for Memcached, you can now seamlessly scale your Memcached instances up or down to match your application’s changing workload demands without disrupting your cluster architecture. You can scale up to boost performance and increase cache capacity during high-traffic periods, or scale down to optimize costs when demand is low. This enables you to align your caching infrastructure with your evolving application needs, enhancing cost efficiency and improving resource utilization. Vertical scaling on ElastiCache for Memcached is now available in all AWS Regions. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page and documentation.
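Vertical scaling changes the node type of an existing cluster. The minimal boto3 sketch below assumes the ModifyCacheCluster API's CacheNodeType parameter is used for Memcached as shown; the cluster ID and node type are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Scale a self-designed Memcached cluster up to a larger node type (placeholders shown).
elasticache.modify_cache_cluster(
    CacheClusterId="my-memcached-cluster",
    CacheNodeType="cache.r7g.xlarge",
    ApplyImmediately=True,
)

# Track progress until the cluster returns to the "available" state.
status = elasticache.describe_cache_clusters(CacheClusterId="my-memcached-cluster")
print(status["CacheClusters"][0]["CacheClusterStatus"])
```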
AWS Blogs
AWS Japan Blog (Japanese)
- Amplify Hosting announces skew protection
- AWS supports Scuderia Ferrari HP to optimize the assembly process for Formula 1® power units
- AWS Weekly Review: Amazon EKS, Amazon OpenSearch, Amazon API Gateway, etc. (April 7, 2025)
AWS News Blog
AWS Big Data Blog
- Build unified pipelines spanning multiple AWS accounts and Regions with Amazon MWAA
- Integrate ThoughtSpot with Amazon Redshift using AWS IAM Identity Center
AWS Contact Center
AWS Database Blog
AWS DevOps & Developer Productivity Blog
Integration & Automation
AWS Machine Learning Blog
- Reduce ML training costs with Amazon SageMaker HyperPod
- Model customization, RAG, or both: A case study with Amazon Nova
- Generate user-personalized communication with Amazon Personalize and Amazon Bedrock
- Automating regulatory compliance: A multi-agent solution using Amazon Bedrock and CrewAI
- Pixtral Large is now available in Amazon Bedrock