9/2/2025, 12:00:00 AM ~ 9/3/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon Bedrock now available in the Asia Pacific (Jakarta) Region

Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Jakarta) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models and other FMs from leading AI companies through a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in, such as Guardrails and model customization. These capabilities help customers build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while maintaining customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
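As a sketch of what "a single API" looks like in practice, the snippet below builds a Bedrock Converse request for the Jakarta region. The model ID is only an example; check the Bedrock console for the models actually available in ap-southeast-3.

```python
# Sketch: invoking a foundation model in the Jakarta region via the
# Bedrock Converse API. The model ID below is an example -- verify the
# models enabled in your account for ap-southeast-3.
params = {
    "modelId": "anthropic.claude-3-5-haiku-20241022-v1:0",  # example ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize our Q3 report."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
}

# To execute (requires boto3 and AWS credentials):
#   import boto3
#   bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-3")
#   response = bedrock.converse(**params)
#   print(response["output"]["message"]["content"][0]["text"])
print(params["modelId"])
```

Because every Bedrock model is called through the same Converse shape, switching models is a one-line change to `modelId`.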

AWS adds the ability to centrally manage access to AWS Regions and AWS Local Zones

Today, AWS announces the ability to manage access to AWS Regions and AWS Local Zones from a single place within the AWS Management Console. With this new capability, customers can now efficiently monitor and manage access to AWS Regions and AWS Local Zones globally.

AWS Global View enables customers to view resources across multiple Regions in a single console. To get started, customers can find “AWS Global View” in the AWS Management Console and navigate to the Regions and Zones page. The Regions and Zones page displays infrastructure location details, opt-in status, and any parent Region relationships, making it easier for customers to manage and monitor their global AWS footprint. This capability is available in all AWS commercial Regions. To learn more, visit the AWS Global View documentation, or navigate to the Regions and Zones page.
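The same Region opt-in information shown on the Regions and Zones page is also available programmatically through the AWS Account API's ListRegions operation; a minimal sketch:

```python
# Sketch: query Region opt-in status via the Account API's ListRegions
# call. The status filter values below follow the Account API reference.
params = {
    "RegionOptStatusContains": ["ENABLED", "ENABLED_BY_DEFAULT"],
}

# To execute (requires boto3 and AWS credentials):
#   import boto3
#   account = boto3.client("account")
#   for region in account.list_regions(**params)["Regions"]:
#       print(region["RegionName"], region["RegionOptStatus"])
print(params["RegionOptStatusContains"])
```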

Amazon CloudWatch Synthetics adds multi-browser support for application testing

Amazon CloudWatch Synthetics now enables customers to test and monitor web applications in Firefox, in addition to existing Chrome support. This enhancement helps customers ensure consistent functionality and performance across different browsers, making it easier to identify browser-specific issues before they impact end users.

With this launch, you can run the same canary script across Chrome and Firefox when using Playwright-based canaries or Puppeteer-based canaries. CloudWatch Synthetics automatically collects browser-specific performance metrics, success rates, and visual monitoring results while maintaining an aggregate view of overall application health. This helps development and operations teams quickly identify and resolve browser compatibility issues that could affect application reliability. Multi-browser support is available in all commercial AWS Regions. To learn more about configuring multi-browser canaries, see the canary docs in the Amazon CloudWatch Synthetics User Guide.

AWS Direct Connect announces 100G expansion in Lagos, Nigeria

Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Rack Centre LGS1 data center near Lagos, Nigeria. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. For more information on the over 145 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
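Port speeds and MACsec capability per location can also be inspected through the Direct Connect DescribeLocations API. The sketch below filters for 100 Gbps MACsec-capable locations; the field names follow the DescribeLocations response, and the sample records are illustrative rather than real data.

```python
# Sketch: filter Direct Connect locations for 100 Gbps MACsec-capable
# ports. Field names follow the DescribeLocations API response; the
# sample records below are illustrative, not real data.
def macsec_100g_locations(locations):
    """Return location codes offering 100Gbps MACsec-capable ports."""
    return [
        loc["locationCode"]
        for loc in locations
        if "100Gbps" in loc.get("availableMacSecPortSpeeds", [])
    ]

sample = [
    {"locationCode": "EXAMPLE1", "availableMacSecPortSpeeds": ["10Gbps", "100Gbps"]},
    {"locationCode": "EXAMPLE2", "availableMacSecPortSpeeds": ["1Gbps"]},
]
print(macsec_100g_locations(sample))  # -> ['EXAMPLE1']

# Against the live API (requires boto3 and AWS credentials):
#   import boto3
#   dx = boto3.client("directconnect")
#   print(macsec_100g_locations(dx.describe_locations()["locations"]))
```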

Amazon RDS for Oracle is now available with Oracle Database Standard Edition 2 (SE2) License Included instances in Asia Pacific (Thailand) and Mexico (Central) regions

Amazon Relational Database Service (Amazon RDS) for Oracle now offers Oracle Database Standard Edition 2 (SE2) License Included R7i and M7i instances in the Asia Pacific (Thailand) and Mexico (Central) Regions.

With Amazon RDS for Oracle SE2 License Included instances, you do not need to purchase Oracle Database licenses. You simply launch Amazon RDS for Oracle instances through the AWS Management Console, AWS CLI, or AWS SDKs, and there are no separate license or support charges. Review the AWS blog Rethink Oracle Standard Edition Two on Amazon RDS for Oracle to explore how you can lower cost and simplify operations by using Amazon RDS for Oracle SE2 License Included instances for your Oracle databases. To learn more about pricing and regional availability, see Amazon RDS for Oracle pricing.
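A minimal sketch of launching an SE2 License Included instance via the RDS CreateDBInstance API; the instance identifier, class, and storage values are illustrative:

```python
# Sketch: launch an Oracle SE2 License Included instance in the
# Asia Pacific (Thailand) Region. Identifier, class, and storage are
# example values.
params = {
    "DBInstanceIdentifier": "my-oracle-se2",   # example name
    "Engine": "oracle-se2",
    "LicenseModel": "license-included",        # no separate Oracle license needed
    "DBInstanceClass": "db.m7i.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,          # store the password in Secrets Manager
}

# To execute (requires boto3 and AWS credentials):
#   import boto3
#   rds = boto3.client("rds", region_name="ap-southeast-7")  # Asia Pacific (Thailand)
#   rds.create_db_instance(**params)
print(params["LicenseModel"])
```

The `license-included` model is what removes the separate Oracle license purchase; the rest of the call is a standard RDS instance launch.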

AWS Direct Connect announces new location in Auckland, New Zealand

Today, as part of the launch of the AWS Asia Pacific (New Zealand) Region, AWS announced the opening of a new AWS Direct Connect location within the Spark Digital Mayoral Drive Exchange (MDR) data center near Auckland, New Zealand. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. For more information on the over 144 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.

Announcing a new open source project for scenario-focused AWS CLI scripts

Amazon Web Services (AWS) is launching a new collection of developer-focused resources for the AWS Command Line Interface (AWS CLI). These resources demonstrate working, end-to-end shell scripts for AWS services, along with best practices that simplify authoring shell scripts that handle errors, track created resources, and perform cleanup operations.

The new AWS Developer Tutorials project on GitHub provides a library of tested, scenario-focused AWS CLI scripts covering over 60 AWS services. These tutorials provide quicker ways to get started using an AWS service API with the AWS CLI. Leveraging generative AI and existing documentation, developers can now more easily create working scripts for their own resources, saving time and reducing errors when managing AWS resources through the AWS CLI. Each script includes a tutorial that explains how the script works with the AWS service API to create, interact with, and clean up resources. The project also includes instructions that you can use to generate and contribute new scripts.

You can use existing content and examples with generative AI tools such as the Amazon Q Developer CLI to generate a working script through an iterative test-and-improve process. Depending on how well documented the use case is, this process can take as little as 15 minutes. For scenarios that don’t have existing examples of API calls with input and output, it can take more iterations to get a working script. Sometimes you need to provide additional information or examples from your own testing to fill in a gap. This process can actually be quite fun! To get started, see AWS Developer Tutorials. For more information on the project, see our post on Builder Center.
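The track-created-resources-and-clean-up pattern the tutorial scripts follow can be sketched in Python (the real tutorials are shell scripts; resource names and cleanup actions here are illustrative):

```python
# Sketch of the pattern the tutorial scripts follow: record every
# resource as it is created, then delete in reverse order whether the
# run succeeds or fails. Resource names here are illustrative.
created = []  # stack of (description, cleanup_callable)

def cleanup():
    """Tear down tracked resources newest-first; return what was deleted."""
    deleted = []
    while created:
        desc, undo = created.pop()
        undo()  # in a real script: a delete API call
        deleted.append(desc)
    return deleted

try:
    created.append(("example-bucket", lambda: None))  # e.g. delete an S3 bucket
    created.append(("example-role", lambda: None))    # e.g. delete an IAM role
    # ... further provisioning steps that may raise ...
finally:
    print(cleanup())  # -> ['example-role', 'example-bucket'] (newest first)
```

Deleting newest-first matters because later resources typically depend on earlier ones (an instance must go before its security group, for example).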

AWS Resource Explorer is now available in AWS Asia Pacific (Taipei) Region

Today, AWS Resource Explorer has expanded the availability of resource search and discovery to the Asia Pacific (Taipei) AWS Region.

With AWS Resource Explorer you can search for and discover your AWS resources across AWS Regions and accounts in your organization, using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. For more information about the AWS Regions where AWS Resource Explorer is available, see the AWS Region table. To turn on AWS Resource Explorer, visit the AWS Resource Explorer console. Read about getting started in our AWS Resource Explorer documentation, or explore the AWS Resource Explorer product page.
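A minimal sketch of a programmatic search with the Resource Explorer Search API; the query string is an example (ap-east-2 is the Asia Pacific (Taipei) Region code):

```python
# Sketch: search for EC2 resources in the new Taipei region with the
# Resource Explorer Search API. The query string is an example.
params = {
    "QueryString": "service:ec2 region:ap-east-2",
    "MaxResults": 50,
}

# To execute (requires boto3, AWS credentials, and Resource Explorer
# turned on in your account):
#   import boto3
#   rex = boto3.client("resource-explorer-2")
#   for r in rex.search(**params)["Resources"]:
#       print(r["Arn"])
print(params["QueryString"])
```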

Split Cost Allocation Data for Amazon EKS supports NVIDIA & AMD GPU, Trainium, and Inferentia-powered EC2 instances

Starting today, Split Cost Allocation Data supports accelerated-computing workloads running on Amazon Elastic Kubernetes Service (Amazon EKS). The new feature in Split Cost Allocation Data for EKS allows customers to track the costs associated with accelerator-powered (Trainium, Inferentia, and NVIDIA and AMD GPU) container-level resources within their EKS clusters, in addition to the costs for CPU and memory. This cost data is available in the AWS Cost and Usage Report, including CUR 2.0.

With this new capability, customers get greater visibility into their AI/ML cloud infrastructure expenses. Customers can now allocate application costs to individual business units and teams based on the CPU, memory, and accelerator resource reservations of their containerized accelerated-computing workloads. New Split Cost Allocation Data customers can enable this feature in the AWS Billing and Cost Management console. This feature is automatically enabled for existing Split Cost Allocation Data customers. You can use the Containers Cost Allocation dashboard to visualize the costs in Amazon QuickSight and the CUR query library to query the costs using Amazon Athena.

This feature is available in all AWS Regions where Split Cost Allocation Data for Amazon EKS is available. To get started, visit Understanding Split Cost Allocation Data and Improve cost visibility of Machine Learning workloads on Amazon EKS with AWS Split Cost Allocation Data.

Amazon Neptune Now Integrated with Zep to Power Long-Term Memory for GenAI Applications

Today, we’re announcing the integration of Amazon Neptune with Zep, an open-source memory server for LLM applications. Zep enables developers to persist, retrieve, and enrich user interaction history, providing long-term memory and context for AI agents. With this launch, customers can now use Neptune Database or Neptune Analytics as the underlying graph store and Amazon OpenSearch Service as the text-search store for Zep’s memory system, enabling graph-powered memory retrieval and reasoning.

This integration makes it easier to build LLM agents with long-term memory, context, and reasoning. Zep users can now store and query memory graphs at scale, unlocking multi-hop reasoning and hybrid retrieval across graph, vector, and keyword modalities. By combining Zep’s memory orchestration with Neptune’s graph-native knowledge representation, developers can build more personalized, context-aware, and intelligent LLM applications. Zep helps applications remember user interactions, extract structured knowledge, and reason across memory, making it easier to build LLM agents that improve over time. To learn more about the Neptune–Zep integration, check the sample notebook.

AWS Transform assessments now include detached storage

AWS Transform expands its assessment capability with a new storage analysis feature, helping customers analyze their on-premises detached storage infrastructure and determine the Total Cost of Ownership (TCO) for migrating to AWS.

AWS Transform assessment now evaluates customers’ existing storage infrastructure, including Storage Area Network (SAN), Network Attached Storage (NAS), file servers, object storage, and virtual environments, providing detailed migration recommendations to AWS services such as Amazon S3 for object storage, Amazon EBS for block storage, and Amazon FSx for specialized file systems. The assessment delivers a comprehensive TCO comparison between current and AWS environments, along with performance and cost optimization recommendations for compute and storage workloads. With storage accounting for up to 45% of total migration opportunities, AWS Transform assessment helps customers visualize the benefits of different AWS migration options.

AWS Transform assessments is available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt). To get started, visit the AWS Transform web experience.

AWS Deadline Cloud Now Supports Automating Job Attachments Downloads

AWS Deadline Cloud now supports automating job attachments output downloads with new functionality available in the Deadline Cloud client. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects, for films, television and broadcasting, web content, and design.

The Deadline Cloud job attachments feature makes it easy to synchronize assets from the workstation where you make them to the workers in your render farm by uploading them to Amazon S3. It also automatically identifies any output files generated during task execution and stores them back to S3. The new Deadline Cloud client command downloads all outputs for completed jobs from a provided queue, and can be invoked on a schedule, using tools like cron or task scheduler, to automatically download the outputs from all jobs as they complete. The outputs are downloaded to the local paths specified during job creation for final review.

For more information, please visit the Deadline Cloud product page and our AWS Deadline Cloud documentation.

AWS Transform for VMware supports flexible network management and broader AWS Region coverage

AWS Transform for VMware now helps you maintain business continuity during migrations by supporting Virtual Private Cloud (VPC) Classless Inter-Domain Routing (CIDR) range modifications, allowing you to run workloads in both on-premises and AWS environments without IP conflicts. When adjusting VPC CIDRs, AWS Transform automatically updates all associated resources including subnets, security groups, routing tables, and target instances. You have flexible options for IP address management: maintain your source IP addresses on new target instances, use adjusted IP addresses that align with new VPC CIDRs, or opt for IP address assignment using Dynamic Host Configuration Protocol (DHCP).

AWS Transform for VMware is an agentic AI service that accelerates migration and modernization of VMware workloads. The service automates everything from discovery and wave planning to network configurations and server migrations, enabling organizations to modernize their infrastructure with unprecedented speed and confidence.

AWS Transform for VMware has expanded its regional coverage with new migration target regions: US East (Ohio), Europe (Stockholm), and Europe (Ireland). Access the supported migration target region list for the most up-to-date availability information.

To learn more, visit the AWS Transform for VMware product page, read the user guide, or get started in the AWS Transform web experience.

Simplified Cache Management for Anthropic’s Claude models in Amazon Bedrock

Amazon Bedrock has updated prompt caching for Anthropic’s Claude models to improve ease of use for Claude 3.5 Haiku, Claude 3.7, and Claude 4 models.

Previously, developers needed to manage cache points manually and keep track of which cached segments should be reused. With simplified cache management, you only need to set a cache breakpoint at the end of your request. The system automatically reads from the longest previously cached prefix, which eliminates the need to manually specify which segments to reuse and reduces the effort required to manage cache logic.

By automatically identifying and applying the right cached content, simplified cache management not only helps reduce manual effort, but also helps free up more tokens since cache read tokens are not counted toward your token per minute (TPM) quotas. This can make it easier to build multi-turn workflows and research assistants, while improving both performance and cost efficiency.

Simplified cache management is available today in all regions where Anthropic Claude 3.5 Haiku, Claude 3.7, and Claude 4 models are offered on Amazon Bedrock. To get started, review the Amazon Bedrock Developer Guide and enable caching in your model invocations.
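A sketch of what "a single cache breakpoint at the end of the request" looks like with the Converse API; the model ID and prompt text are examples:

```python
# Sketch: with simplified cache management, one cachePoint content block
# at the end of the request is enough -- Bedrock reads from the longest
# previously cached prefix automatically. Model ID is an example.
params = {
    "modelId": "anthropic.claude-3-5-haiku-20241022-v1:0",  # example ID
    "system": [
        {"text": "You are a research assistant. <long shared context here>"},
    ],
    "messages": [
        {
            "role": "user",
            "content": [
                {"text": "What changed between draft 2 and draft 3?"},
                {"cachePoint": {"type": "default"}},  # single breakpoint at the end
            ],
        }
    ],
}

# To execute (requires boto3 and AWS credentials):
#   import boto3
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.converse(**params)
print(params["messages"][0]["content"][-1])
```

On repeated multi-turn calls, everything before the breakpoint that matches a prior request is served from cache, so only the new turn is processed at full cost.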

AWS Blogs

AWS Japan Blog (Japanese)

AWS Cloud Financial Management

AWS Big Data Blog

Containers

AWS Database Blog

AWS for Industries

Artificial Intelligence

Networking & Content Delivery

AWS Quantum Technologies Blog

AWS Storage Blog

Open Source Project

AWS CLI

AWS CDK

Bottlerocket OS