1/21/2026, 12:00:00 AM ~ 1/22/2026, 12:00:00 AM (UTC)

Recent Announcements

Amazon SageMaker HyperPod introduces enhanced lifecycle scripts debugging

Amazon SageMaker HyperPod now provides enhanced troubleshooting capabilities for lifecycle scripts, making it easier to identify and resolve issues during cluster node provisioning. SageMaker HyperPod helps you provision resilient clusters for running AI/ML workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs).

When lifecycle scripts encounter issues during cluster creation or node operations, you now receive detailed error messages that include the specific CloudWatch log group and log stream names where you can find execution logs for lifecycle scripts. You can view these error messages by running the DescribeCluster API or by viewing the cluster details page in the SageMaker console. The console also provides a “View lifecycle script logs” button that navigates directly to the relevant CloudWatch log stream, making it easier to locate logs. Additionally, CloudWatch logs for lifecycle scripts now include specific markers to help you track lifecycle script execution progress, including indicators for when the lifecycle script log begins, when scripts are being downloaded, when downloads complete, and when scripts succeed or fail. These markers help you quickly identify where issues occurred during the provisioning process. These enhancements reduce the time required to diagnose and fix lifecycle script failures, helping you get your HyperPod clusters up and running faster. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more, see SageMaker HyperPod cluster management in the Amazon SageMaker Developer Guide.
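The error details surfaced by DescribeCluster can also be pulled programmatically. Below is a minimal boto3 sketch, assuming a hypothetical cluster named my-hyperpod-cluster and that the lifecycle script failure details (including the CloudWatch log group and stream names) appear in a FailureMessage field of the response; check the actual DescribeCluster output in your account.

```python
# Minimal sketch: reading lifecycle-script failure details via DescribeCluster with boto3.
# The cluster name and the exact response fields read below (e.g. FailureMessage) are
# assumptions for illustration.
import boto3

sagemaker = boto3.client("sagemaker")

response = sagemaker.describe_cluster(ClusterName="my-hyperpod-cluster")  # hypothetical name

print("Cluster status:", response.get("ClusterStatus"))

# With this launch, the failure message is expected to reference the CloudWatch
# log group and log stream that hold the lifecycle script execution logs.
if "FailureMessage" in response:
    print("Failure details:", response["FailureMessage"])
```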

AWS Clean Rooms adds support for join and partition hints in SQL

AWS Clean Rooms announces support for join and partition hints for SQL queries, enabling optimization of join strategies and data partitioning for improved query performance and reduced costs. This launch enables you to apply SQL hints to your queries using comment-style syntax in pre-approved analysis templates as well as ad hoc SQL queries. You can now optimize large table joins using a broadcast join hint, and you can improve data distribution with partition hints for better parallel processing. For example, a measurement company analyzing how many households viewed a live sports event uses a broadcast join hint on their lookup table to improve query performance and reduce costs.

With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
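As an illustration of the measurement example above, here is a minimal boto3 sketch that submits an ad hoc query carrying a comment-style broadcast join hint. The membership ID, table names, S3 output location, and the exact hint keyword are assumptions; consult the Clean Rooms documentation for the hint syntax supported in your collaboration.

```python
# Minimal sketch: submitting an ad hoc Clean Rooms query with a broadcast join hint.
# All identifiers below are placeholders for illustration.
import boto3

cleanrooms = boto3.client("cleanrooms")

# Comment-style hint asking the engine to broadcast the small lookup table in the join.
query = """
SELECT /*+ BROADCAST(event_lookup) */
       l.event_name,
       COUNT(DISTINCT v.household_id) AS households
FROM viewership v
JOIN event_lookup l ON v.event_id = l.event_id
GROUP BY l.event_name
"""

cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="membership-id-placeholder",   # hypothetical
    sqlParameters={"queryString": query},
    resultConfiguration={
        "outputConfiguration": {
            "s3": {
                "bucketName": "my-results-bucket",       # hypothetical
                "resultFormat": "CSV",
            }
        }
    },
)
```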

Amazon Connect can now automatically select random samples of agent contacts for evaluation

Amazon Connect can now provide managers with random samples of agent contacts for evaluation, so they can give fair coaching feedback to agents. Managers can specify how many contacts they need to review per agent, as required by union agreements, regulations, or internal guidelines. They then receive the required number of contacts randomly selected from the specified timeframe, for example, 3 contacts per agent from the last week. Additionally, managers can use new filters to ensure that the selected contacts are suitable for evaluation, for example by limiting the sample to contacts with audio or screen recordings or transcripts, and by excluding previously evaluated contacts.

This feature is available in all AWS Regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.

Amazon EMR Serverless now supports AWS KMS customer managed keys for encrypting local disks

Amazon EMR Serverless now supports encrypting local disks with AWS Key Management Service (KMS) customer managed keys (CMKs). You can now meet strict regulatory and compliance requirements with additional encryption options beyond the default AWS-owned keys, giving you greater control over your encryption strategy.

Amazon EMR Serverless is a deployment option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Local disks on EMR Serverless workers are encrypted by default using AWS-owned keys. With this launch, customers with strict regulatory and compliance needs can encrypt local disks with a KMS customer managed key in the same account or from another account. This integration is supported on new and existing EMR Serverless applications and on all supported EMR release versions. You can specify the customer managed key at the application level, where it applies to all workloads submitted to the application, or for a specific job run or interactive session. This feature is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) and China Regions. To learn more, see Local Disk Encryption with AWS KMS CMK in the Amazon EMR Serverless User Guide.
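Using a customer managed key typically also requires that the key policy allow the principals running your workloads to use the key. The sketch below shows that general pattern with boto3; the key ID, role ARN, and the exact set of KMS actions EMR Serverless needs are assumptions for illustration, so follow the Local Disk Encryption page in the EMR Serverless User Guide for the authoritative policy.

```python
# Minimal sketch: granting a job runtime role use of a customer managed key, the usual
# prerequisite before pointing a service at a CMK. Identifiers below are placeholders.
import json
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-0000-0000-0000-000000000000"                     # hypothetical CMK
runtime_role_arn = "arn:aws:iam::111122223333:role/EMRServerlessJobRole"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keep key administration with the key-owning account.
            "Sid": "EnableRootAccountAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Let the workload role generate and decrypt the data keys used for
            # local disk encryption (assumed action set; verify against the guide).
            "Sid": "AllowLocalDiskEncryptionWithCMK",
            "Effect": "Allow",
            "Principal": {"AWS": runtime_role_arn},
            "Action": ["kms:GenerateDataKey", "kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```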

Amazon Bedrock Reserved Tier available now for Claude Sonnet 4.5 in AWS GovCloud (US-West)

Today, Amazon Bedrock expands the Reserved service tier, designed for workloads requiring predictable performance and guaranteed tokens-per-minute capacity. The Reserved tier provides the ability to reserve prioritized compute capacity, keeping service levels predictable for your mission-critical applications. It also includes the flexibility to allocate different input and output tokens-per-minute capacities to match the exact requirements of your workload and control cost. This is particularly valuable because many workloads have asymmetric token usage patterns. For instance, summarization tasks consume many input tokens but generate fewer output tokens, while content generation applications require less input and more output capacity. When your application needs more tokens-per-minute capacity than what you reserved, the service automatically overflows to the pay-as-you-go Standard tier, ensuring uninterrupted operations. The Reserved tier is available today for Anthropic Claude Sonnet 4.5 in AWS GovCloud (US-West). Customers can reserve capacity for a 1-month or 3-month duration, pay a fixed price per 1K tokens-per-minute, and are billed monthly. The Amazon Bedrock Reserved tier is available for customers in AWS GovCloud (US-West) via the GOV-CRIS cross-region profile.

With the expansion of the Reserved service tier, Amazon Bedrock continues to provide more choice to customers, helping them develop, scale, and deploy applications and agents that improve productivity and customer experiences while balancing performance and cost requirements. For more information about the AWS Regions where the Amazon Bedrock Reserved tier is available, refer to the documentation. To get access to the Reserved tier, please contact your AWS account team.

Amazon EC2 C8gn instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the Asia Pacific (Mumbai), Africa (Cape Town), Europe (Ireland, London), and Canada West (Calgary) AWS Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network-optimized EC2 instances.

Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.

C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm, Ireland, London), Asia Pacific (Singapore, Malaysia, Sydney, Thailand, Mumbai), Middle East (UAE), Africa (Cape Town), and Canada West (Calgary).

To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
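For the EFA-capable sizes mentioned above, here is a minimal boto3 sketch that launches a C8gn instance with an EFA interface in a cluster placement group. The AMI ID, subnet, security group, and placement group names are placeholders for illustration.

```python
# Minimal sketch: launching a C8gn instance with an EFA network interface.
# Identifiers below are placeholders; EFA must be attached at launch.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # e.g. Asia Pacific (Mumbai)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # hypothetical arm64 (Graviton) AMI
    InstanceType="c8gn.16xlarge",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",   # hypothetical
            "Groups": ["sg-0123456789abcdef0"],       # hypothetical
            "InterfaceType": "efa",                   # request an EFA interface
        }
    ],
    # A cluster placement group keeps tightly coupled nodes close together.
    Placement={"GroupName": "my-cluster-pg"},          # hypothetical
)
```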

AWS introduces additional policy details to access denied error messages

AWS now includes the AWS Identity and Access Management (IAM) or AWS Organizations policy’s Amazon Resource Name (ARN) in access denied error messages in same-account and same-organization scenarios. This allows you to quickly identify the exact policy responsible for the denied access and take action to troubleshoot the issue.

Before this launch, customers had to identify the root cause of access denied errors based only on the policy type in the error message. This launch expedites troubleshooting when you have multiple policies of the same type, as you can directly see which policy to address for explicit deny cases. The error message now includes the policy ARN for Service Control Policies (SCPs), Resource Control Policies (RCPs), identity-based policies, session policies, and permissions boundaries. This additional context will gradually become available across AWS services in all AWS Regions. To learn more, refer to the IAM documentation.
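Because the ARN is part of the error message text, tooling can pick it up from the exception returned by the SDK. Below is a minimal boto3 sketch; the regular expression assumes the policy ARN appears verbatim in the message, which is an assumption for illustration, so inspect the actual error string returned in your account.

```python
# Minimal sketch: surfacing the denying policy's ARN from an AccessDenied error message.
import re
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    s3.list_buckets()
except ClientError as err:
    message = err.response["Error"]["Message"]
    print("Access denied:", message)
    # Pull out any IAM or Organizations policy ARN included in the message
    # (assumed message format; verify against a real error in your account).
    match = re.search(r"arn:aws[^\s\"']*policy[^\s\"']*", message)
    if match:
        print("Denying policy:", match.group(0))
```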

Instance Scheduler on AWS adds enhanced scaling, reliability, and event-driven automation

Today AWS announced enhancements to Instance Scheduler on AWS: enhanced scheduling orchestration that tracks AWS tagging events, self-service troubleshooting via informational resource tags, an optional EC2 insufficient-capacity retry flow using alternate instance types, and automatic creation of a dedicated Amazon EventBridge bus for scheduling events. The solution’s orchestration and fan-out mechanisms have been re-architected to track AWS tagging events, allowing it to more intelligently sequence and distribute scheduling operations, which improves scaling performance and addresses cost-scaling concerns (see the tagging sketch below). Distributed cloud engineers can now perform self-service troubleshooting in their spoke accounts through informational tags applied to their resources, without relying on a central cloud administrator. In addition, an optional insufficient-capacity retry flow automatically retries failed start actions using alternate instance types when EC2 encounters insufficient capacity errors, helping workloads start reliably even in constrained Availability Zones or Regions. Lastly, Instance Scheduler on AWS now automatically creates a dedicated EventBridge event bus for scheduling-related events, streamlining integrations and automation workflows.

This update improves Instance Scheduler’s scalability, reduces operational overhead, and increases workload reliability across complex customer environments. You can accelerate issue resolution and boost operational efficiency by empowering distributed cloud engineers to troubleshoot independently. You can enhance overall workload resilience through improved handling of EC2 capacity shortages, and simplify integrations by routing events through the new event bus to support more extensible automation workflows.
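Since the re-architected orchestration reacts to tagging events, opting a resource into scheduling is a matter of applying the solution's schedule tag. Below is a minimal boto3 sketch, assuming the solution's default Schedule tag key and a hypothetical schedule named office-hours defined in your deployment; both are configurable.

```python
# Minimal sketch: opting an EC2 instance into a schedule by applying the tag that
# Instance Scheduler on AWS watches. "Schedule" is the default tag key and
# "office-hours" is a hypothetical schedule name; both depend on your deployment.
import boto3

ec2 = boto3.client("ec2")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],   # hypothetical instance ID
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)
```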

To learn more about Instance Scheduler, visit the Product Page or contact your AWS account team.

AWS Transfer Family Terraform module now supports web apps

You can now use the AWS Transfer Family Terraform module to deploy Transfer Family web apps, which enable end users to transfer files to and from Amazon S3 over a web interface. This release streamlines centralized provisioning of web apps with federated authentication and user access controls, enabling consistent, repeatable deployments through infrastructure as code.

With Transfer Family web apps, you can provide your workforce with a fully managed, branded web portal to browse, upload, and download data in S3. In a single deployment, this module allows you to programmatically provision web apps that authenticate users through AWS IAM Identity Center using your existing identity provider, with Amazon S3 Access Grants for fine-grained user permissions. An included end-to-end example shows you how to assign and optionally create IAM Identity Center users and groups, configure S3 Access Grants, set up the web app, and enable security auditing through AWS CloudTrail. You can get started by downloading the new module from the Terraform Registry. To learn more about Transfer Family web apps, visit the user guide. To see all the Regions where Transfer Family web apps are available, visit the AWS Region table.

AWS Blogs

AWS Japan Blog (Japanese)

AWS Big Data Blog

AWS Database Blog

Artificial Intelligence

AWS Quantum Technologies Blog

Open Source Project

AWS CLI