9/8/2025, 12:00:00 AM ~ 9/9/2025, 12:00:00 AM (UTC)

Recent Announcements

AWS WAF is now available in the AWS Asia Pacific (Taipei) Region

Starting today, AWS WAF is available in the AWS Asia Pacific (Taipei) Region.

AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.

To see the full list of Regions where AWS WAF is currently available, visit the AWS Region Table. Please note that AWS WAF Bot Control with the targeted level of inspection and the Anti-DDoS managed rule group are not currently available in this Region. For more information about the service, visit the AWS WAF page. For more information about pricing, visit the AWS WAF Pricing page.

AWS WAF now includes free WAF Vended Logs based on request volume

AWS WAF now includes 500 MB of CloudWatch Logs Vended Logs ingestion for every 1 million WAF requests processed, at no additional cost. This helps customers better manage their WAF logging costs while maintaining comprehensive security visibility.

WAF logs in CloudWatch provide valuable insights for security analysis, compliance, and troubleshooting. Customers can leverage CloudWatch’s advanced analytics capabilities, including Logs Insights queries, anomaly detection, and dashboards, to monitor and analyze their web application traffic patterns and security events. The included logs allocation is automatically applied based on WAF request usage on your AWS bill at month end, making it easy to take advantage of the new pricing. The free allocation applies across WAF-specific Vended Logs delivered to CloudWatch, S3, and Firehose. Usage beyond the included 500 MB per 1 million WAF requests is charged at the AWS WAF-specific Vended Logs pricing in CloudWatch. For pricing details, please visit the AWS WAF pricing page. To learn more about WAF logging capabilities and how to get started, visit the AWS WAF documentation.
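
To make the allocation concrete, here is a minimal sketch, in Python, of the month-end arithmetic described above: the free tier scales linearly with request volume, and only ingestion beyond it is billed. The function name and the example figures are illustrative, not part of the announcement.

# Illustrative arithmetic for the free Vended Logs allocation described above.
FREE_MB_PER_MILLION_REQUESTS = 500

def billable_log_gb(waf_requests: int, log_ingestion_gb: float) -> float:
    """Return the log ingestion (in GB) billed after the free allocation."""
    free_gb = (waf_requests / 1_000_000) * FREE_MB_PER_MILLION_REQUESTS / 1024
    return max(0.0, log_ingestion_gb - free_gb)

# Example: 10 million requests earn roughly 4.88 GB of free ingestion,
# so a month with 6 GB of WAF logs would leave about 1.12 GB billable.
print(billable_log_gb(waf_requests=10_000_000, log_ingestion_gb=6.0))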

Announcing Managed Tiered Checkpointing for Amazon SageMaker HyperPod

Today, Amazon Web Services (AWS) announces the general availability of managed tiered checkpointing for Amazon SageMaker HyperPod, a new capability designed to reduce model recovery time and minimize loss in training progress. As AI training scales, the likelihood of infrastructure failures increases, making efficient checkpointing critical. Traditional checkpointing methods can be slow and resource-intensive, especially for large models. SageMaker HyperPod’s managed tiered checkpointing addresses this by using CPU memory to store frequent checkpoints for rapid recovery, while periodically persisting data to Amazon S3 for long-term durability. This hybrid approach minimizes training loss and significantly reduces the time to resume training after a failure.

With managed tiered checkpointing, organizations can train reliably and with high throughput on large-scale clusters. The solution allows customers to configure checkpoint frequency and retention policies across both in-memory and persistent storage tiers. By storing checkpoints frequently in memory, customers can recover quickly while minimizing storage costs. Integrated with PyTorch’s Distributed Checkpoint (DCP), customers can implement checkpointing with only a few lines of code, while gaining the performance benefits of in-memory storage. This feature is currently available for SageMaker HyperPod clusters using the EKS orchestrator. Customers can enable managed tiered checkpointing by specifying an API parameter when creating or updating a HyperPod cluster via the CreateCluster or UpdateCluster API. Customers can then use the sagemaker-checkpointing Python library to implement managed tiered checkpointing with minimal code changes to their training scripts. Managed tiered checkpointing is available in all Regions where SageMaker HyperPod is currently available. To learn more, please refer to the blog post and documentation.
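
Because the announcement highlights the PyTorch Distributed Checkpoint (DCP) integration, here is a minimal sketch of what saving a checkpoint from a training loop could look like. The SageMakerTieredStorageWriter class, its module path, and its parameters are assumptions for illustration only; consult the sagemaker-checkpointing library documentation for the actual interface.

import torch.distributed.checkpoint as dcp
# Assumed import: the real module path and class name in the
# sagemaker-checkpointing library may differ.
from sagemaker_checkpointing import SageMakerTieredStorageWriter

def save_checkpoint(model, optimizer, step: int):
    # Standard DCP pattern: collect the state to checkpoint into a dict.
    state_dict = {"model": model.state_dict(), "optimizer": optimizer.state_dict()}
    # Hypothetical writer: frequent checkpoints land in CPU memory for fast
    # recovery and are periodically persisted to S3, per the announcement.
    writer = SageMakerTieredStorageWriter(
        s3_uri="s3://my-bucket/checkpoints",  # placeholder bucket
        step=step,
    )
    # dcp.save is the standard PyTorch Distributed Checkpoint entry point;
    # supplying a different storage_writer is how DCP backends plug in.
    dcp.save(state_dict, storage_writer=writer)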

Amazon Neptune Analytics is now supported as a graph store in NetworkX

Today, we are announcing that NetworkX now supports Amazon Neptune Analytics as a graph store. With this release, developers can continue to use familiar NetworkX APIs while automatically offloading graph algorithm workloads to Neptune’s scalable, high-performance analytics engine. This makes it simple to scale graph computations on demand without refactoring code, combining the ease of local development with the performance and elasticity of a fully managed AWS service.

Previously, when datasets grew beyond the limits of a local environment, users had to turn to third-party services, rebuilding their graph models to fit proprietary formats, exporting and importing data, and learning entirely new systems. With the new nx-neptune integration, developers only need an AWS account and credentials; the solution automatically handles graph data modeling, data movement (Zero-ETL), and infrastructure management. It provisions a Neptune Analytics instance, runs the requested algorithm, returns results directly to the user, and then tears down the infrastructure for a cost-effective, serverless-like experience, all without requiring the user to leave their familiar Python workflow. NetworkX is a widely used open-source Python library for creating, analyzing, and visualizing complex graphs. It offers an extensive collection of graph algorithms and utilities, making it a popular choice among researchers, data scientists, and developers for prototyping and experimenting with graph-based applications. To learn more about the Neptune–NetworkX integration, visit the documentation.
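
As a rough illustration of the workflow, the sketch below builds a graph locally and dispatches an algorithm through NetworkX's backend mechanism. The backend name "neptune" is an assumption about what nx-neptune registers; check the nx-neptune documentation for the exact usage.

import networkx as nx  # assumes nx-neptune is installed and AWS credentials are configured

# Build a graph locally with the familiar NetworkX API.
G = nx.karate_club_graph()

# NetworkX 3.x can dispatch algorithms to installed backend plugins via the
# backend keyword; "neptune" is the name we assume nx-neptune registers.
scores = nx.pagerank(G, backend="neptune")

# Results come back as ordinary Python objects; the Neptune Analytics
# instance provisioned behind the scenes is torn down after the run.
print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])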

Amazon CloudFront announces support for IPv6 origins

Amazon CloudFront expands its IPv6 capabilities by introducing support for IPv6 connectivity to origin servers, allowing customers to implement end-to-end IPv6 content delivery for their web applications. Support for IPv6 origins enables customers to send IPv6 traffic all the way to their origins, helping them meet architectural and regulatory requirements for IPv6 adoption. End-to-end IPv6 support improves network performance for end users connecting over IPv6 networks, and also removes concerns about IPv4 address exhaustion for origin infrastructure.

Previously, CloudFront only supported IPv4 connectivity to origins, even though it accepted IPv6 connections from end users. Customers can now configure their custom origins to use IPv4-only (the default), IPv6-only, or dual-stack connectivity. When using dual-stack, CloudFront automatically chooses between IPv4 and IPv6 addresses to distribute traffic to the origin evenly over both protocols. Customers can configure IPv6-only or dual-stack origins in all supported AWS Commercial Regions; Amazon S3 and VPC origins are excluded. To learn more about IPv6 support with CloudFront, visit the CloudFront documentation.
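
For a sense of how this could be configured programmatically, here is a hedged boto3 sketch using CloudFront's standard read-modify-write update flow. The get_distribution_config and update_distribution calls are real boto3 operations, but the "IpAddressType" key on the origin is a hypothetical field name; check the CloudFront API reference for the actual property.

import boto3  # assumes AWS credentials with CloudFront permissions

cloudfront = boto3.client("cloudfront")

# Read the current distribution config; the returned ETag is required
# for the subsequent update call.
resp = cloudfront.get_distribution_config(Id="EDFDVBD6EXAMPLE")  # placeholder ID
config = resp["DistributionConfig"]

# Hypothetical key: the actual property that selects IPv4-only, IPv6-only,
# or dual-stack origin connectivity may be named differently.
config["Origins"]["Items"][0]["CustomOriginConfig"]["IpAddressType"] = "dualstack"

cloudfront.update_distribution(
    Id="EDFDVBD6EXAMPLE",
    IfMatch=resp["ETag"],
    DistributionConfig=config,
)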

Amazon SageMaker Unified Studio announces the general availability of Custom Blueprints

Today, AWS announced the general availability of Custom Blueprints, a new feature in Amazon SageMaker Unified Studio, part of the next generation of Amazon SageMaker. This feature allows customers to use their own managed policies, per their corporate security requirements, to create a project role in SageMaker Unified Studio. Customers can either replace the managed policies provided by Amazon SageMaker Unified Studio as part of the tooling blueprint with their custom policies, or enrich the existing policies by appending additional ones.

In addition to letting you bring your own managed policies, Custom Blueprints let you configure the infrastructure and resources that you want to deploy in a project created in Amazon SageMaker Unified Studio. Using your own AWS CloudFormation templates, you can define and customize the parameters and configuration for AWS resources such as Amazon EMR on EC2, AWS Glue Data Catalog, and Amazon Redshift; a sketch of such a template follows below. You can replace the service-managed blueprints with your custom blueprints to ensure standardization across your entire organization. The sample templates to create your custom blueprints are available here. Custom Blueprints are available in all AWS Commercial Regions where the next generation of Amazon SageMaker is available. See the supported regions list for more details. For instructions on how to get started, visit the Amazon SageMaker documentation.
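
To give a feel for what a blueprint's CloudFormation template might contain, here is a minimal sketch expressed as Python that emits the template JSON. The AWS::Glue::Database resource type and the overall template structure are standard CloudFormation; the ProjectName parameter and its wiring are assumptions for illustration, not the schema the sample templates actually use.

import json

# A minimal CloudFormation template defining one AWS Glue Data Catalog
# database, the kind of resource a custom blueprint could provision.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        # Assumed parameter: a value the blueprint could fill in per project.
        "ProjectName": {"Type": "String"},
    },
    "Resources": {
        "ProjectGlueDatabase": {
            "Type": "AWS::Glue::Database",
            "Properties": {
                "CatalogId": {"Ref": "AWS::AccountId"},
                "DatabaseInput": {"Name": {"Fn::Sub": "${ProjectName}-db"}},
            },
        },
    },
}

print(json.dumps(template, indent=2))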

Introducing improved AI assistance in Amazon SageMaker Unified Studio

Today, we are announcing improvements to the Amazon Q Developer chat experience in Amazon SageMaker Unified Studio Jupyter notebooks, and adding Amazon Q Developer to the command line in Jupyter notebooks and Code Editor. By integrating with Model Context Protocol (MCP) servers, Amazon Q Developer is aware of your SageMaker Unified Studio project resources, including data, compute, and code, and provides personalized assistance for data engineering and machine learning development work.

These new capabilities provide highly relevant responses to assist with tasks like code refactoring, file modification, and troubleshooting. This helps data scientists and data engineers quickly set up their integrated development environments and work more efficiently, while maintaining transparency into how the AI assistant is acting on their behalf. These features are available at no additional cost with the Amazon Q Developer Free Tier in all AWS Regions where Amazon SageMaker Unified Studio is available. To get even more out of these features, we recommend enabling Amazon Q Developer Pro. To do so, please refer to the documentation.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Compute Blog

AWS Database Blog

AWS DevOps & Developer Productivity Blog

Artificial Intelligence

AWS for M&E Blog

Networking & Content Delivery

AWS Quantum Technologies Blog

AWS Storage Blog

Open Source Projects

AWS CLI

Amplify for iOS

Amplify for Android