10/30/2025, 12:00:00 AM ~ 10/31/2025, 12:00:00 AM (UTC)
Recent Announcements
TwelveLabs’ Pegasus 1.2 model now available in three additional AWS regions
Amazon announces the expansion of TwelveLabs’ Pegasus 1.2 video understanding model to the US East (Ohio), US West (N. California), and Europe (Frankfurt) AWS Regions. This expansion makes it easier for customers to build and scale generative AI applications that understand and interact with video content at enterprise scale.
Pegasus 1.2 is a powerful video-first language model that generates text based on the visual, audio, and textual content within videos. Designed specifically for long-form video, it excels at video-to-text generation and temporal understanding. With Pegasus 1.2 available in these additional Regions, you can build video-intelligence applications closer to your data and end users in key geographic locations, reducing latency and simplifying your architecture. With today’s expansion, Pegasus 1.2 is available in Amazon Bedrock across seven Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), US West (N. California), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Seoul). To get started with Pegasus 1.2, visit the Amazon Bedrock console. To learn more, read the blog, product page, Amazon Bedrock pricing, and documentation.
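As a rough illustration of calling the model from one of the new Regions, the sketch below builds an InvokeModel request for a video-to-text prompt. The model identifier, request fields, bucket, and account ID are all illustrative assumptions rather than details from this announcement; check the TwelveLabs Pegasus page in the Amazon Bedrock documentation for the exact request schema.

```python
import json

# Hypothetical model identifier; look up the real one in the Bedrock console.
MODEL_ID = "twelvelabs.pegasus-1-2-v1:0"

def build_request(prompt, s3_uri, bucket_owner):
    """Assemble an assumed Pegasus request body: a text prompt plus a
    pointer to a video object in S3."""
    return {
        "inputPrompt": prompt,
        "mediaSource": {
            "s3Location": {"uri": s3_uri, "bucketOwner": bucket_owner}
        },
    }

body = build_request(
    "Summarize the key moments in this clip.",
    "s3://my-video-bucket/launch-event.mp4",  # placeholder object
    "111122223333",                           # placeholder account ID
)
payload = json.dumps(body)

# The actual call would then be something like:
# bedrock = boto3.client("bedrock-runtime", region_name="us-east-2")
# response = bedrock.invoke_model(modelId=MODEL_ID, body=payload)
```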
Amazon ECS now supports built-in Linear and Canary deployments
Amazon Elastic Container Service (Amazon ECS) announces support for linear and canary deployment strategies, giving you more flexibility and control when deploying containerized applications. These new strategies complement ECS’s built-in blue/green deployments, enabling you to choose the traffic-shifting approach that best matches your application’s risk profile and validation requirements.
With linear deployments, you can gradually shift traffic from your current service revision to the new revision in equal percentage increments over a specified time period. You configure the step percentage (for example, 10%) to control how much traffic shifts at each increment, and set a step bake time to wait between each traffic shift for monitoring and validation. This allows you to validate your new application version at multiple stages with increasing amounts of production traffic. With canary deployments, you can route a small percentage of production traffic to your new service revision while the majority of traffic remains on the current stable version. You set a canary bake time to monitor the new revision’s performance, after which Amazon ECS shifts the remaining traffic to the new revision. Both strategies support a deployment bake time that waits after all production traffic has shifted to the new revision before terminating the old revision, enabling quick rollback without downtime if issues are detected. You can configure deployment lifecycle hooks to perform custom validation steps, and use Amazon CloudWatch alarms to automatically detect failures and trigger rollbacks.
The feature is available in all commercial AWS Regions where Amazon ECS is available. You can use linear and canary deployment strategies for new and existing Amazon ECS services that use Application Load Balancer (ALB) or ECS Service Connect, using the Console, SDK, CLI, CloudFormation, CDK, and Terraform. To learn more, see our documentation on Amazon ECS linear deployments and Amazon ECS canary deployments.
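To make the linear strategy concrete, here is a small, self-contained sketch (not an AWS API) that computes the traffic-shift schedule implied by a given step percentage and step bake time:

```python
# Illustrative only: model the cumulative traffic schedule of an ECS
# linear deployment, given a step percentage and a per-step bake time.

def linear_shift_schedule(step_percent, bake_minutes):
    """Return (elapsed_minutes, cumulative_traffic_percent) tuples.

    Traffic moves to the new revision in equal increments of
    `step_percent`, waiting `bake_minutes` after each shift.
    """
    schedule = []
    shifted, elapsed = 0, 0
    while shifted < 100:
        shifted = min(100, shifted + step_percent)
        schedule.append((elapsed, shifted))
        elapsed += bake_minutes  # bake time before the next shift
    return schedule

# Example: 25% steps with a 5-minute bake between shifts.
print(linear_shift_schedule(25, 5))
# → [(0, 25), (5, 50), (10, 75), (15, 100)]
```

A canary deployment is the degenerate case: one small initial shift, a single canary bake period, then the remaining traffic moves at once.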
Amazon S3 Access Grants are now available in additional AWS Regions
You can now create Amazon S3 Access Grants in the AWS Asia Pacific (Thailand) and AWS Mexico (Central) Regions.
Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity. Visit the AWS Region Table for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our product page.
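A minimal sketch of registering such a grant with the S3 Control API follows. The account ID, location ID, role ARN, and prefix are placeholders, and the Region code assumes Asia Pacific (Thailand):

```python
# Hedged sketch: map an IAM principal to an S3 prefix with Access Grants.

def build_grant(account_id, location_id, role_arn, prefix):
    """Assemble a CreateAccessGrant request body."""
    return {
        "AccountId": account_id,
        "AccessGrantsLocationId": location_id,
        "AccessGrantsLocationConfiguration": {"S3SubPrefix": prefix},
        "Grantee": {
            "GranteeType": "IAM",
            "GranteeIdentifier": role_arn,
        },
        "Permission": "READ",
    }

grant = build_grant(
    "111122223333",                                   # placeholder account
    "default",                                        # placeholder location
    "arn:aws:iam::111122223333:role/AnalyticsRole",   # placeholder role
    "analytics/*",
)

# The actual call would be:
# s3control = boto3.client("s3control", region_name="ap-southeast-7")
# s3control.create_access_grant(**grant)
```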
Split Cost Allocation Data for Amazon EKS supports Kubernetes labels
Starting today, Split Cost Allocation Data for Amazon EKS allows you to import up to 50 Kubernetes custom labels per pod as cost allocation tags. You can attribute costs of your Amazon EKS cluster at the pod level using custom attributes, such as cost center, application, business unit, and environment, in the AWS Cost and Usage Report (CUR).
With this new capability, you can better align your cost allocation with specific business requirements and the organizational structure driven by your cloud financial management needs. This enables granular cost visibility into EKS clusters running multiple application containers on shared EC2 instances, allowing you to allocate the shared costs of your EKS cluster. New split cost allocation data customers can enable this feature in the AWS Billing and Cost Management console. For existing customers, EKS will automatically import the labels, but you must activate them as cost allocation tags. After activation, Kubernetes custom labels are available in your CUR within 24 hours. You can use the Containers Cost Allocation dashboard to visualize the costs in Amazon QuickSight and the CUR query library to query the costs using Amazon Athena. This feature is available in all AWS Regions where Split Cost Allocation Data for Amazon EKS is available. To get started, visit Understanding Split Cost Allocation Data.
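The activation step can be scripted with the Cost Explorer API. The sketch below builds the request; the tag keys are placeholders, since the exact keys EKS imports for your labels are listed in the Billing and Cost Management console:

```python
# Hedged sketch: activate imported Kubernetes labels as cost allocation tags.

def activation_request(tag_keys):
    """Assemble an UpdateCostAllocationTagsStatus request body."""
    return {
        "CostAllocationTagsStatus": [
            {"TagKey": key, "Status": "Active"} for key in tag_keys
        ]
    }

# Placeholder label keys; substitute the keys shown in your console.
req = activation_request(["cost-center", "business-unit", "environment"])

# The actual call would be:
# ce = boto3.client("ce")
# ce.update_cost_allocation_tags_status(**req)
```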
AWS Elastic Beanstalk adds support for Amazon Corretto 25
AWS Elastic Beanstalk now enables customers to build and deploy Java applications using Amazon Corretto 25 on the Amazon Linux 2023 (AL2023) platform. This latest platform support allows developers to leverage the newest Java 25 features while benefiting from AL2023’s enhanced security and performance capabilities.
AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Corretto 25 on AL2023 allows developers to take advantage of the latest Java language features, including compact object headers, ahead-of-time (AOT) caching, and structured concurrency. Developers can create Elastic Beanstalk environments running Corretto 25 through the Elastic Beanstalk Console, CLI, or API. This platform is generally available in commercial Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see AWS Regions. For more information about Corretto 25 and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
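Creating such an environment through the API might look like the sketch below. The solution stack name is a guess at the platform naming pattern (the version number in particular is hypothetical); list the real stacks with `list_available_solution_stacks` before using one:

```python
# Hedged sketch: create a Beanstalk environment on the Corretto 25 platform.

def build_environment(app, env, stack):
    """Assemble a CreateEnvironment request body."""
    return {
        "ApplicationName": app,
        "EnvironmentName": env,
        "SolutionStackName": stack,
    }

params = build_environment(
    "orders-service",   # placeholder application
    "orders-prod",      # placeholder environment
    # Hypothetical stack name; verify with list_available_solution_stacks().
    "64bit Amazon Linux 2023 v4.7.0 running Corretto 25",
)

# The actual calls would be:
# eb = boto3.client("elasticbeanstalk")
# eb.create_environment(**params)
```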
AWS Clean Rooms launches advanced configurations to optimize SQL performance
Today, AWS Clean Rooms announces support for advanced configurations to improve the performance of Spark SQL queries. This launch enables you to customize Spark properties and compute sizes for SQL queries at runtime, offering increased flexibility to meet your performance, scale, and cost requirements.
With AWS Clean Rooms, you can configure Spark properties, such as shuffle partition settings for parallel processing and autoBroadcastJoinThreshold for optimizing join operations, to better control the behavior and tuning of SQL queries in a Clean Rooms collaboration. Additionally, you can choose to cache an existing table’s data containing results from a SQL query, or create and cache a new table, which helps improve performance and reduce costs for complex queries over large datasets. For example, an advertiser running lift analysis on their advertising campaigns can specify a custom number of workers for an instance type and configure Spark properties, without editing their SQL query, to optimize costs. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
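A sketch of what such a query request might look like follows. The `worker` block mirrors the existing Clean Rooms compute configuration, but the `sparkProperties` field name and placement are assumptions for this new capability; consult the Clean Rooms API reference for the exact shape. The membership identifier and query are placeholders.

```python
# Hedged sketch: a StartProtectedQuery request with a custom worker count
# and runtime Spark properties.

def build_query(membership_id, sql):
    return {
        "membershipIdentifier": membership_id,
        "type": "SQL",
        "sqlParameters": {"queryString": sql},
        # Custom compute size: 16 workers of the CR.1X instance type.
        "computeConfiguration": {"worker": {"type": "CR.1X", "number": 16}},
        # Hypothetical field name for the new runtime Spark settings:
        "sparkProperties": {
            "spark.sql.shuffle.partitions": "400",
            "spark.sql.autoBroadcastJoinThreshold": "104857600",
        },
    }

req = build_query(
    "membership-id-placeholder",
    "SELECT campaign_id, COUNT(*) FROM impressions GROUP BY campaign_id",
)

# The actual call would be:
# cleanrooms = boto3.client("cleanrooms")
# cleanrooms.start_protected_query(**req)
```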
Introducing the Capacity Reservation Topology API for AI, ML, and HPC instance types
AWS announces the general availability of the Amazon Elastic Compute Cloud (EC2) Capacity Reservation Topology API. It joins the Instance Topology API in enabling customers to efficiently manage capacity, schedule jobs, and rank nodes for Artificial Intelligence, Machine Learning, and High-Performance Computing distributed workloads. The Capacity Reservation Topology API gives customers a unique per-account hierarchical view of the relative location of their capacity reservations.
Customers running distributed parallel workloads manage thousands of instances across tens to hundreds of capacity reservations. With the Capacity Reservation Topology API, customers can describe the topology of their reservations as a network node set, which shows the relative proximity of their capacity without the need to launch an instance. This enables efficient capacity planning and management as customers provision workloads on tightly coupled capacity. Customers can then use the Instance Topology API, which provides network nodes consistent with the Capacity Reservation Topology API at further granularity, for a seamless way to schedule jobs and rank nodes for optimal performance in distributed parallel workloads.
The Capacity Reservation Topology API is available in the following AWS regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo), and it is supported on all instances available with the Instance Topology API.
To learn more, please visit the latest EC2 user guide.
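The node-ranking idea can be sketched with plain data in the shape the Instance Topology API returns: each instance carries an ordered `NetworkNodes` list, coarsest hop first, so instances sharing a longer prefix are topologically closer. The node and instance IDs below are synthetic.

```python
from collections import defaultdict

def group_by_shared_node(instances, depth):
    """Group instance IDs by their network-node prefix of `depth` hops.

    Instances in the same group share the first `depth` network nodes
    and are therefore closer together in the network hierarchy.
    """
    groups = defaultdict(list)
    for inst in instances:
        prefix = tuple(inst["NetworkNodes"][:depth])
        groups[prefix].append(inst["InstanceId"])
    return dict(groups)

# Synthetic topology data in the API's response shape.
topology = [
    {"InstanceId": "i-aaa", "NetworkNodes": ["nn-1", "nn-10", "nn-100"]},
    {"InstanceId": "i-bbb", "NetworkNodes": ["nn-1", "nn-10", "nn-101"]},
    {"InstanceId": "i-ccc", "NetworkNodes": ["nn-1", "nn-11", "nn-110"]},
]

# Instances sharing the first two hops are the tightly coupled set:
print(group_by_shared_node(topology, 2))
# → {('nn-1', 'nn-10'): ['i-aaa', 'i-bbb'], ('nn-1', 'nn-11'): ['i-ccc']}
```

With live data, the `topology` list would come from `boto3.client("ec2").describe_instance_topology()`, and the Capacity Reservation Topology API would supply equivalent node sets for reservations before any instance is launched.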
AWS Step Functions announces a new metrics dashboard
AWS Step Functions announces improved observability with a new metrics dashboard, giving you visibility into your workflow operations at both the account and state machine levels. AWS Step Functions is a visual workflow service that orchestrates more than 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads.
With this launch, you can view usage and billing metrics in one dashboard on the AWS Step Functions console. Metrics are available at both the account and state machine levels, for both standard and express workflows. In addition, existing metrics, such as ApproximateOpenMapRunCount, are available on the dashboard. The new dashboard and metrics are available in all AWS Regions where AWS Step Functions is available. To get started, open the dashboard today in the AWS Step Functions console. To learn more, visit the Step Functions developer guide.
Amazon GameLift Servers adds telemetry metrics to all server SDKs and game engine plugins
Today, Amazon GameLift Servers added built-in telemetry metrics across all server SDKs and game engine plugins. Built on OpenTelemetry, an open-source framework, Amazon GameLift Servers telemetry metrics enable game developers to generate, collect, and export critical client-side metrics for game-specific insights.
With this release, Amazon GameLift Servers can now be configured to collect and publish telemetry metrics for game servers running on managed Amazon EC2 and container fleets. Customers can leverage both pre-defined metrics and custom metrics, publishing them to Amazon Managed Service for Prometheus or Amazon CloudWatch. This data can be visualized through ready-to-use dashboards (via Amazon Managed Grafana or Amazon CloudWatch) to help game developers optimize resource utilization, improve player experience, and identify and resolve potential operational issues. Telemetry metrics are now available in all Amazon GameLift Servers supported Regions, except AWS China. For more information on monitoring resources using telemetry metrics on Amazon GameLift Servers, please visit the Amazon GameLift Servers documentation.
AWS Serverless MCP Server now supports tools for AWS Lambda event source mappings (ESM)
The AWS Serverless Model Context Protocol (MCP) Server now supports specialized tools for AWS Lambda event source mappings (ESM), helping developers configure and manage ESMs more efficiently. These new tools combine the power of AI assistance with Lambda ESM expertise to streamline how developers set up, optimize, and troubleshoot event-driven serverless applications built on Lambda.
We previously launched the open-source Serverless MCP Server to enhance how developers build modern applications with AI-powered contextual guidance for architecture decisions, infrastructure provisioning, deployment automation, and troubleshooting of serverless applications. Starting today, we’re expanding the MCP server’s capabilities with new ESM tools that empower AI assistants, like Amazon Q Developer and Kiro, with proven knowledge of ESM patterns and best practices. The new ESM tools translate high-level throughput, latency, and reliability requirements into specific ESM configurations, generate complete AWS Serverless Application Model (AWS SAM) templates with optimized settings, validate network topology for Amazon Virtual Private Cloud (VPC)-based event sources, and diagnose common ESM issues. Thus, these tools enhance the event-driven application development experience, guiding developers through the entire ESM lifecycle, from initial setup to optimization and troubleshooting. The key new ESM tools being added to the Serverless MCP Server are: the ESM guidance tool for contextual guidance across all supported event sources, the ESM optimization tool for analyzing configuration tradeoffs, and the ESM Kafka troubleshooting tool for specialized diagnostics with Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka clusters. To learn more about the Serverless MCP Server and how it can transform your AI-assisted application development, visit the launch blog post and documentation.
To download and try out the open-source MCP server with your AI-enabled IDE of choice, visit the GitHub repository.
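For context, an AWS SAM event source mapping of the kind these tools generate and tune looks like the fragment below. The function name, cluster ARN, topic, and tuning values are placeholders chosen for illustration:

```yaml
# Hedged sketch: a SAM function consuming from an Amazon MSK topic,
# with the batching knobs the ESM optimization tool reasons about.
Resources:
  ConsumerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.13
      Events:
        OrdersStream:
          Type: MSK
          Properties:
            Stream: arn:aws:kafka:us-east-1:111122223333:cluster/demo/abc123  # placeholder
            Topics:
              - orders
            StartingPosition: LATEST
            BatchSize: 500                      # throughput vs. latency tradeoff
            MaximumBatchingWindowInSeconds: 5   # wait to fill batches
```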
AWS Cloud Map supports cross-account workloads in AWS GovCloud (US) Regions
AWS Cloud Map now supports cross-account service discovery through integration with AWS Resource Access Manager (AWS RAM) in the AWS GovCloud (US) Regions. This enhancement lets you seamlessly manage and discover cloud resources (such as Amazon ECS tasks, Amazon EC2 instances, and Amazon DynamoDB tables) across AWS accounts. By sharing your AWS Cloud Map namespace via AWS RAM, workloads in other accounts can discover and manage resources registered in that namespace. This enhancement simplifies resource sharing, reduces duplication, and promotes consistent service discovery across environments for organizations with multi-account architectures.
You can now share your AWS Cloud Map namespaces using AWS RAM with individual AWS accounts, specific Organizational Units (OUs), or your entire AWS Organization. To get started, create a resource share in AWS RAM, add the namespaces you want to share, and specify the principals (accounts, OUs, or the organization) that should have access. This enables platform engineers to maintain a centralized service registry (or a small set of registries) and share them across multiple accounts, simplifying service discovery. Application developers can then build services that rely on a consistent, shared registry without worrying about availability or synchronization across accounts. AWS Cloud Map’s cross-account service discovery support improves operational efficiency and makes it easier to scale service discovery as your organization grows by reducing duplication and streamlining access to namespaces. This feature is available now in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions via the AWS Management Console, API, SDK, CLI, and CloudFormation. To learn more, please refer to the AWS Cloud Map documentation.
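The steps above (create a resource share, add the namespace, name the principals) can be sketched against the AWS RAM API as follows. The namespace ARN and account IDs are placeholders:

```python
# Hedged sketch: share a Cloud Map namespace through AWS RAM.

def build_share(name, namespace_arn, principals):
    """Assemble a CreateResourceShare request body."""
    return {
        "name": name,
        "resourceArns": [namespace_arn],
        # Principals may be account IDs, OU ARNs, or an organization ARN.
        "principals": principals,
    }

share = build_share(
    "shared-service-registry",
    # Placeholder namespace ARN in GovCloud (US-West):
    "arn:aws-us-gov:servicediscovery:us-gov-west-1:111122223333"
    ":namespace/ns-example123",
    ["444455556666"],  # placeholder consumer account
)

# The actual call would be:
# ram = boto3.client("ram", region_name="us-gov-west-1")
# ram.create_resource_share(**share)
```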
AWS Backup adds single-action database snapshot copy across AWS Regions and accounts
AWS Backup now supports copying database snapshots across AWS Regions and accounts in a single copy action. This feature supports Amazon RDS, Amazon Aurora, Amazon Neptune, and Amazon DocumentDB snapshots, and it eliminates the need for sequential copying steps.
You can use cross-Region and cross-account snapshot copies to protect against incidents like ransomware attacks and Region outages that might affect your production accounts or primary Regions. Previously, you needed to perform this as a two-step process: first copying to a different Region, and then to a different account (or vice versa). Now, by completing this in one step, you can achieve faster recovery point objectives (RPOs) while eliminating costs associated with intermediate copies. This streamlined process also simplifies the workflow by removing the need for custom scripts or Lambda functions that monitor intermediate copy status.
This feature is available for all Amazon RDS and Amazon Aurora engines, Amazon Neptune, and Amazon DocumentDB, in all Regions where AWS Backup supports cross-Region and cross-account copying of snapshots in separate steps. You can start using this feature today through the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To get started, refer to the AWS Backup documentation.
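In a backup plan, the single-action copy amounts to one copy action whose destination vault lives in a different account and Region. The sketch below builds such a rule; vault names, ARNs, and account IDs are placeholders:

```python
# Hedged sketch: a backup rule with one copy action targeting a vault
# in another account AND another Region, replacing the old two-step copy.

def build_rule(vault, dest_vault_arn):
    """Assemble one rule for a CreateBackupPlan request."""
    return {
        "RuleName": "daily-with-remote-copy",
        "TargetBackupVaultName": vault,
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
        "CopyActions": [
            {
                "DestinationBackupVaultArn": dest_vault_arn,
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }

rule = build_rule(
    "prod-vault",
    # Placeholder: DR vault in a second account and a second Region.
    "arn:aws:backup:eu-west-1:444455556666:backup-vault:dr-vault",
)

# The actual call would be:
# backup = boto3.client("backup")
# backup.create_backup_plan(
#     BackupPlan={"BackupPlanName": "rds-dr", "Rules": [rule]})
```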
Announcing an AI agent context pack for AWS IoT Greengrass developers
AWS announces the release of a new AI agent context package for accelerating edge device application development with AWS IoT Greengrass. AWS IoT Greengrass is an IoT edge runtime and cloud service that helps developers build, deploy, and manage device software at the edge. The context package includes ready-to-use instructions, examples, and templates, enabling developers to leverage generative AI tools and agents for faster software creation, testing, and deployment.
Available as an open-source GitHub repository under the Creative Commons Attribution Share Alike 4.0 license, the AWS IoT Greengrass AI agent context package helps streamline development workflows. Developers can boost productivity by cloning the repository and integrating it with modern generative AI tools like Amazon Q to help accelerate cloud-connected edge application development while simplifying fleet-wide deployment and management. This new capability is available in all AWS Regions where AWS IoT Greengrass is supported. To learn more about AWS IoT Greengrass and its new AI agent context pack, visit the AWS IoT Greengrass documentation. Follow the getting started guide for a quick introduction to AWS IoT Greengrass.
Introducing the Amazon OCSF Ready Specialization
We are excited to announce the Amazon OCSF Ready Specialization, which recognizes AWS Partners who have technically validated their software solutions to integrate with OCSF-compatible Amazon services and have proven customer success in production environments. The Open Cybersecurity Schema Framework (OCSF) is an open-source initiative that simplifies how security data is normalized and shared across your security tools. This validation ensures customers can confidently select solutions that will help them improve their security operations through standardized data formats, leading to efficient threat detection, vulnerability identification, and enhanced security analytics.
The AWS Service Ready Program provides customers with AWS Partner software solutions that work with AWS services. This specialization helps you quickly find and deploy pre-validated AWS Partner solutions that work seamlessly with OCSF-compatible Amazon services, reducing the complexity of your security operations. Partners can participate in the Amazon OCSF Ready designation by either sending logs and security events in the OCSF schema, or receiving logs or security events from OCSF-compatible Amazon services. This standardization helps customers collect, combine, and analyze security data, reducing the time and effort needed for security operations. Amazon OCSF Ready Partners receive AWS Specialization Program benefits and have access to signature benefits, including private strategy sessions and AWS guest speaker support for virtual events. The Amazon OCSF Ready Specialization expands and replaces the Amazon Security Lake Specialization. To learn more about how to become an Amazon OCSF Ready Partner, visit the AWS Service Ready Program webpage.
Amazon Managed Service for Prometheus adds anomaly detection
Amazon Managed Service for Prometheus, a fully managed Prometheus-compatible monitoring service, now supports anomaly detection. Anomaly detection applies machine-learning algorithms to continuously analyze time series and surface anomalies with minimal user intervention. You can use anomaly detection to isolate and troubleshoot unexpected changes in your metric behavior.
Amazon Managed Service for Prometheus anomaly detection currently supports Random Cut Forest (RCF), an unsupervised algorithm for detecting anomalous data points within a time series. Once you create and configure an anomaly detector in an Amazon Managed Service for Prometheus workspace, it creates four new time series representing the detected anomalies and their confidence values. Based on these time series, you can create dynamic alerting rules in the Amazon Managed Service for Prometheus alert manager to notify you when anomalies occur, and you can visualize them alongside the input time series in either self-managed Grafana or Amazon Managed Grafana dashboards.
This feature is now available in all AWS Regions where Amazon Managed Service for Prometheus is generally available. To configure anomaly detection, use the AWS CLI, SDK, or APIs. Check out the Amazon Managed Service for Prometheus user guide for detailed documentation.
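An alerting rule over the detector's output might look like the fragment below. This is a standard Prometheus rule file, but the metric name is hypothetical; substitute the series names your detector actually emits:

```yaml
# Hedged sketch: fire an alert whenever the detector's anomaly series
# (hypothetical name `api_latency_seconds:anomalies`) is non-zero.
groups:
  - name: anomaly-alerts
    rules:
      - alert: LatencyAnomaly
        expr: api_latency_seconds:anomalies > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "RCF detector flagged anomalous latency"
```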
Amazon ECS Service Connect enhances observability with Envoy Access Logs
Amazon Elastic Container Service (Amazon ECS) Service Connect now supports Envoy access logs, providing deeper observability into request-level traffic patterns and service interactions. This new capability captures detailed per-request telemetry for end-to-end tracing, debugging, and compliance monitoring.
Amazon ECS Service Connect makes it simple to build secure, resilient service-to-service communication across clusters, VPCs, and AWS accounts. It integrates service discovery and service mesh capabilities by automatically injecting AWS-managed Envoy proxies as sidecars that handle traffic routing, load balancing, and inter-service connectivity. Envoy access logs capture detailed traffic metadata, enabling request-level visibility into service communication patterns. This enables you to perform network diagnostics, troubleshoot issues efficiently, and maintain audit trails for compliance requirements. You can now configure access logs within ECS Service Connect by updating the ServiceConnectConfiguration to enable access logging. Query strings are redacted by default to protect sensitive data. Envoy access logs are written to the standard output (STDOUT) stream alongside application logs and flow through the existing ECS log pipeline without requiring additional infrastructure. This configuration supports all existing application protocols (HTTP, HTTP/2, gRPC, and TCP). This feature is available in all Regions where Amazon ECS Service Connect is supported. To learn more, visit the Amazon ECS Developer Guide.
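The configuration change might be shaped like the sketch below. The `accessLogConfiguration` field name is an assumption for this new setting (the announcement only says access logging is enabled via ServiceConnectConfiguration); the namespace, cluster, and service names are placeholders. See the ECS API reference for the exact parameter.

```python
# Hedged sketch: a ServiceConnectConfiguration with access logging enabled.

def service_connect_config(namespace):
    return {
        "enabled": True,
        "namespace": namespace,
        # Hypothetical field for the new per-request Envoy access logs:
        "accessLogConfiguration": {"enabled": True},
        "services": [
            {
                "portName": "api",
                "clientAliases": [{"port": 8080}],
            }
        ],
    }

cfg = service_connect_config("internal.example")

# The actual call would be:
# ecs = boto3.client("ecs")
# ecs.update_service(cluster="prod", service="api",
#                    serviceConnectConfiguration=cfg)
```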
Amazon Bedrock AgentCore Browser now reduces CAPTCHAs with Web Bot Auth (Preview)
Amazon Bedrock AgentCore Browser provides a fast, secure, cloud-based browser for AI agents to interact with websites at scale. It now enables agents to quickly establish trusted, accountable access and reduce CAPTCHA interruptions in automated workflows through Web Bot Auth, a draft IETF protocol that cryptographically identifies AI agents to websites. Traditional security measures like CAPTCHAs, rate limits, and blocks often halt automated workflows because Web Application Firewalls (WAFs) treat all automated traffic as suspicious, meaning AI agents frequently need human intervention to complete their tasks.
By enabling Web Bot Auth, AgentCore Browser streamlines bot verification across major security providers including Akamai Technologies, Cloudflare, and HUMAN Security. It automatically generates security credentials, signs HTTP requests with private keys, and registers verified identities, so you can get started immediately without registering with multiple WAF providers or managing verification infrastructure. Web Bot Auth support for AgentCore Browser is available in preview in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more, read the blog, and see the Reduce CAPTCHAs with Web Bot Auth documentation to get started with Web Bot Auth in Browser. AgentCore offers consumption-based pricing with no upfront costs.
AWS Blogs
AWS Japan Blog (Japanese)
- How to use agent steering and MCP to teach Kiro new skills
- AWS Weekly Roundup: AWS RTB Fabric, AWS Customer Carbon Footprint Tool, AWS Secret-West Region, etc. (2025/10/27)
- Introducing AWS RTB Fabric for real-time ad technology workloads
- Steps for a successful launch on Amazon GameLift Servers: Launch Phase
- Understand resource allocation and performance analysis using AWS DMS extended monitoring
- AWS DMS Implementation Guide: Building Resilient Database Migrations with Testing, Monitoring, and SOPs
- Cyber Resilience: Implementing Cyber Event Recovery (Financial Reference Architecture Japan 2025)