8/22/2025, 12:00:00 AM ~ 8/25/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon EKS enables namespace configuration for AWS and Community add-ons

Amazon Elastic Kubernetes Service (Amazon EKS) now supports Kubernetes namespace configuration for AWS and Community add-ons, providing you greater control over how add-ons are organized within your Kubernetes cluster.

With namespace configuration, you can now specify a custom namespace during add-on installation, enabling better organization and isolation of add-on objects within your EKS cluster. This flexibility helps you align add-ons with your operational needs and existing namespace strategy. Once an add-on is installed in a specific namespace, you must remove and recreate the add-on to change its namespace. This feature is available through the AWS Management Console, Amazon EKS APIs, the AWS Command Line Interface (CLI), and infrastructure as code tools like AWS CloudFormation. Namespace configuration for AWS and Community add-ons is now available in all commercial AWS Regions. To learn more, visit the Amazon EKS documentation.
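For illustration, a minimal boto3 sketch of installing an add-on into a custom namespace is shown below. The namespaceConfig parameter name, the metrics-server add-on, and the cluster and namespace names are assumptions based on this announcement rather than confirmed API details; check the EKS API reference for the exact request shape.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Install an add-on into a custom namespace of the cluster.
# "namespaceConfig" is a hypothetical parameter name inferred from the
# announcement; consult the current create_addon API reference.
response = eks.create_addon(
    clusterName="my-cluster",                                 # placeholder
    addonName="metrics-server",                               # placeholder add-on
    namespaceConfig={"namespace": "platform-observability"},  # hypothetical field
    resolveConflicts="OVERWRITE",
)
print(response["addon"]["status"])  # e.g. CREATING
```

Because changing the namespace later requires removing and recreating the add-on, it is worth choosing a namespace that matches your existing namespace strategy up front.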

Amazon RDS for PostgreSQL now supports delayed read replicas

Amazon RDS for PostgreSQL now supports delayed read replicas, allowing you to specify a minimum time period that a replica database lags behind a source database. This feature creates a time buffer that helps protect against data loss from human errors such as accidental table drops or unintended data modifications.

In disaster recovery scenarios, you can pause replication before problematic changes are applied, resume replication up to a specific log position, and promote the replica as your new primary database. This approach enables faster recovery compared to traditional point-in-time restore operations, which can take hours for large databases. This feature is available in all AWS Regions where RDS for PostgreSQL is offered, including the AWS GovCloud (US) Regions, at no additional cost beyond standard RDS pricing. To learn more, visit the Amazon RDS for PostgreSQL documentation.
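As a rough sketch of the recovery workflow described above, the boto3 calls below create a replica and later promote it. The instance identifiers are placeholders, and configuring the replication delay itself (and stopping or resuming replication at a given log position) is done as described in the RDS for PostgreSQL documentation rather than shown here.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the PostgreSQL primary. The minimum replication
# delay is configured per the RDS for PostgreSQL documentation and is not
# shown in this sketch.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-delayed-replica",  # placeholder name
    SourceDBInstanceIdentifier="orders-db-primary",    # placeholder name
)

# During recovery, after replication has been stopped before the problematic
# change and rolled forward to the desired log position, promote the replica
# so it becomes a standalone, writable instance.
rds.promote_read_replica(DBInstanceIdentifier="orders-db-delayed-replica")
```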

Amazon EC2 R7g instances now available in Africa (Cape Town)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7g instances are available in the AWS Africa (Cape Town) Region. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance than AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3-based instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R7g. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
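If you prefer to start from code rather than the console, a launch might look like the following boto3 sketch; the AMI ID is a placeholder and must be replaced with an arm64 AMI available in af-south-1.

```python
import boto3

ec2 = boto3.client("ec2", region_name="af-south-1")  # Africa (Cape Town)

# Launch a Graviton3-based R7g instance. Replace the placeholder AMI ID with
# an arm64 AMI that exists in af-south-1.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="r7g.large",
    MinCount=1,
    MaxCount=1,
)
```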

Amazon RDS for Db2 now supports read replicas

Amazon Relational Database Service (Amazon RDS) for Db2 now supports read replicas. Customers can add up to three read replicas for their database instance and use the replicas to support read-only applications without overloading the primary database instance.

Customers can set up replicas in the same AWS Region as the primary database instance or in a different Region. When a read replica is set up, RDS replicates changes asynchronously to the read replicas. Customers can run their read-only queries against the read replica without impacting performance of the primary database instance. Customers can also use read replicas for disaster recovery procedures by promoting a read replica to support both read and write operations. Read replicas require IBM Db2 licenses for all vCPUs on replica instances. Customers can obtain On-Demand Db2 licenses from the AWS Marketplace or use Bring Your Own License (BYOL). To learn more, refer to the Amazon RDS for Db2 documentation and pricing pages.
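For example, a cross-Region replica and a disaster recovery promotion might look like the boto3 sketch below. The instance identifiers and account ID are placeholders, and Db2 licensing for the replica's vCPUs is handled separately as noted above.

```python
import boto3

# Create a cross-Region read replica by referencing the source instance ARN.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.create_db_instance_read_replica(
    DBInstanceIdentifier="db2-prod-replica-west",  # placeholder name
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:db:db2-prod-primary"  # placeholder ARN
    ),
)

# For disaster recovery, promote the replica to a standalone instance that
# supports both read and write operations.
rds_west.promote_read_replica(DBInstanceIdentifier="db2-prod-replica-west")
```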

Announcing the AWS Billing and Cost Management MCP server

Today, AWS announced the release of a Model Context Protocol (MCP) server for Billing and Cost Management, now available in the AWS Labs GitHub repository. The Billing and Cost Management MCP server allows customers to analyze their historical spending, find cost optimization opportunities, and estimate the costs of new workloads using the AI agent or assistant of their choice.

Artificial intelligence is transforming the way that customers manage FinOps practices. While customers can access AI-powered cost analysis and optimization capabilities in Amazon Q Developer in the console, the Billing and Cost Management MCP server brings these capabilities to any MCP-compatible AI assistant or agent that customers may be using, such as the Amazon Q Developer CLI, the Kiro IDE, Visual Studio Code, or Claude Desktop. This MCP server gives these clients rich capabilities to analyze historical and forecasted cost and usage data, identify cost optimization opportunities, understand AWS service pricing, find cost anomalies, and more. The MCP server not only provides access to AWS service APIs; it also provides a dedicated SQL-based calculation engine that allows AI assistants to perform reliable, reproducible calculations, ranging from period-over-period changes to unit cost metrics, and to easily handle large volumes of cost and usage data. You can download and integrate the open-source server with your preferred MCP-compatible AI assistant. The server connects securely to the AWS Billing and Cost Management services using standard AWS credentials with minimal configuration required. To get started, visit the AWS Labs GitHub repository.

Count Tokens API supported for Anthropic’s Claude models now in Amazon Bedrock

The Count Tokens API is now available in Amazon Bedrock, enabling you to determine the token count for a given prompt or input being sent to a specific model ID prior to performing any inference.

By surfacing a prompt’s token count, the Count Tokens API allows you to more accurately project your costs, and provides you with greater transparency and control over your AI model usage. It allows you to proactively manage your token limits on Amazon Bedrock, helping to optimize your usage and avoid unexpected throttling. It also helps ensure your workloads fit within a model’s context length limit, allowing for more efficient prompt optimization.

At launch, the Count Tokens API will support Claude models, with the functionality available in all regions where these models are supported. For more information about this new feature, including supported models and use cases, visit the Count Tokens API documentation.
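As an illustration only, a token-count call from boto3 might look like the sketch below; the count_tokens operation name, request shape, and response handling are assumptions based on this announcement, so consult the Count Tokens API documentation for the exact interface and supported Claude model IDs.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Count the tokens a prompt would consume before running any inference.
# "count_tokens" and the request body below are hypothetical, inferred from
# the announcement; see the Count Tokens API documentation for the real shape.
response = bedrock_runtime.count_tokens(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    input={
        "invokeModel": {
            "body": json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [{"role": "user", "content": "Summarize our Q3 costs."}],
            })
        }
    },
)
print(response)  # inspect the returned token count
```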

Amazon SageMaker Unified Studio adds S3 file sharing options to projects

Amazon SageMaker Unified Studio now offers a simplified file storage option in projects, providing data workers with an easier way to collaborate on their analytics and machine learning workflows without depending on Git. You can now choose between Git repositories (GitHub, GitLab, or Bitbucket Cloud) or Amazon Simple Storage Service (Amazon S3) buckets for sharing code files between the members of a project. While S3 is the default option, customers who want to use Git can continue to have the same experience as they do today.

With this launch, customers will see a consistent view of their files irrespective of the tool they are working in across SageMaker Unified Studio (such as JupyterLab, Code Editor, or the SQL query editor), making it easy to create, edit, and share code. The S3 file storage option operates on a “last write wins” principle and supports basic file versioning when enabled by administrators. This option is particularly beneficial for data science teams who want to focus on their analytics and machine learning work without managing Git operations, while still maintaining a collaborative workspace for their project artifacts. This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more about storage options in SageMaker Unified Studio projects, see Managing Project Files in the Amazon SageMaker Unified Studio User Guide.

AWS Blogs

AWS Japan Blog (Japanese)

AWS Cloud Financial Management

AWS Big Data Blog

AWS Database Blog

Artificial Intelligence

AWS for M&E Blog

Open Source Project

AWS CLI

Amplify UI