5/19/2025, 12:00:00 AM ~ 5/20/2025, 12:00:00 AM (UTC)

Recent Announcements

Announcing Amazon Bedrock Agents Metrics in CloudWatch

Amazon Bedrock now offers comprehensive CloudWatch metrics support for Agents, enabling developers to monitor, troubleshoot, and optimize their agent-based applications with greater visibility. This new capability provides detailed runtime metrics for both InvokeAgent and InvokeInlineAgent operations, including invocation counts, latency measurements, token usage, and error rates, helping customers better understand their agents’ performance in production environments.

With CloudWatch metrics integration, developers can track critical performance indicators such as total processing time, time-to-first-token (TTFT), model latency, and token counts across different dimensions including operation type, model ID, and agent alias ARN. These metrics enable customers to identify bottlenecks, detect anomalies, and make data-driven decisions to improve their agents’ efficiency and reliability. Customers can also set up CloudWatch alarms to receive notifications when metrics exceed specified thresholds, allowing for proactive management of their agent deployments.

CloudWatch metrics for Amazon Bedrock Agents is now available in all AWS Regions where Amazon Bedrock is supported. To get started with monitoring your agents, ensure your IAM service role has the appropriate CloudWatch permissions. For more information about this feature and implementation details, visit the Amazon Bedrock documentation or refer to the CloudWatch User Guide for comprehensive monitoring best practices.
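As a sketch, an alarm on one of these metrics might be configured as below. Note that the namespace, metric name, and dimension names used here are assumptions for illustration, not confirmed values; check the Amazon Bedrock documentation for the exact names your account emits.

```python
# Sketch: parameters for a CloudWatch PutMetricAlarm call that notifies
# when agent invocation latency crosses a threshold. The namespace,
# metric name, and dimension name below are assumptions, not confirmed
# values -- verify them against the Amazon Bedrock documentation.
alarm_params = {
    "AlarmName": "bedrock-agent-high-latency",
    "Namespace": "AWS/Bedrock/Agents",       # assumed namespace
    "MetricName": "InvocationLatency",       # assumed metric name
    "Dimensions": [
        {"Name": "Operation", "Value": "InvokeAgent"},
    ],
    "Statistic": "Average",
    "Period": 300,                # evaluate in 5-minute windows
    "EvaluationPeriods": 3,       # alarm after 3 consecutive breaches
    "Threshold": 5000.0,          # milliseconds
    "ComparisonOperator": "GreaterThanThreshold",
}
# With boto3 installed, this would be applied as:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

The alarm fires only after three consecutive 5-minute windows breach the threshold, which damps transient latency spikes.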

AWS CodeBuild adds support for new IAM condition keys

AWS CodeBuild now supports new IAM condition keys enabling granular access control on CodeBuild’s resource-modifying APIs. The new condition keys cover most of CodeBuild’s API request contexts, including network settings, credential configurations, and compute restrictions. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

The new condition keys allow you to create IAM policies that better enforce your organizational policies on CodeBuild resources such as projects and fleets. For example, you can use codebuild:vpcConfig.vpcId condition keys to enforce the VPC connectivity settings on projects or fleets, codebuild:source.buildspec condition keys to prevent unauthorized modifications to project buildspec commands, and codebuild:computeConfiguration.instanceType condition keys to restrict which compute types your builds can use. The new IAM condition keys are available in all AWS Regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. For a full list of new CodeBuild IAM condition keys, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
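For illustration, a policy using one of these condition keys might look like the sketch below. The condition key name comes from the announcement; the denied instance-type value is a placeholder, not a recommendation.

```python
import json

# Sketch: an IAM policy that uses one of the new CodeBuild condition
# keys to deny creating or updating projects that request a particular
# compute instance type. The instance-type value is an illustrative
# placeholder -- substitute the types your organization restricts.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["codebuild:CreateProject", "codebuild:UpdateProject"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "codebuild:computeConfiguration.instanceType": [
                        "example.large",  # placeholder instance type
                    ]
                }
            },
        }
    ],
}

# Serialize for attachment to a role or user via IAM.
policy_json = json.dumps(policy, indent=2)
```

A Deny statement like this overrides any Allow, so builds requesting the restricted compute type are blocked regardless of other permissions.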

DynamoDB local is now accessible on AWS CloudShell

Today, Amazon DynamoDB announces the general availability of DynamoDB local on AWS CloudShell, a browser-based, pre-authenticated shell that you can launch directly from the AWS Management Console. With DynamoDB local, you can develop and test your applications by running DynamoDB in your local development environment without incurring any costs.

DynamoDB local works with your existing DynamoDB API calls without impacting your production environment. You can now start DynamoDB local just by using the dynamodb-local alias in CloudShell to develop and test your DynamoDB tables anywhere in the console, without downloading or installing the AWS CLI or DynamoDB local. To interact with DynamoDB local running in CloudShell with CLI commands, use the --endpoint-url parameter and point it to localhost:8000. You can navigate to CloudShell from the AWS Management Console a few different ways. For more information, see Getting started with AWS CloudShell. To learn more about using DynamoDB local command line options, see DynamoDB local usage notes.
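The same local endpoint works from an SDK as well as the CLI. A minimal sketch, assuming DynamoDB local was started with the dynamodb-local alias and is listening on its default port 8000:

```python
# Sketch: client settings for pointing an AWS SDK at DynamoDB local.
# Assumes DynamoDB local is already running (e.g. started with the
# `dynamodb-local` alias in CloudShell) on the default port 8000.
LOCAL_ENDPOINT = "http://localhost:8000"

client_kwargs = {
    "service_name": "dynamodb",
    "endpoint_url": LOCAL_ENDPOINT,  # mirrors the CLI's --endpoint-url flag
    "region_name": "us-east-1",      # any region string works for local
}

# With boto3 installed and DynamoDB local running:
#   client = boto3.client(**client_kwargs)
#   client.list_tables()
```

Because only the endpoint changes, the same application code can run against DynamoDB local in development and the real service in production.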

AWS CloudWatch Synthetics adds safe canary updates and automatic retries

Today, CloudWatch Synthetics, which allows monitoring of customer workflows on websites through periodically running custom code scripts, announces two new features: canary safe updates and automatic retries for failing canaries. The former allows you to test updates for your existing canaries before applying changes, and the latter enables canaries to automatically attempt additional retries when a scheduled run fails, helping to differentiate between genuine and intermittent failures.

Canary safe updates helps minimize potential monitoring disruptions caused by erroneous updates. By doing a dry run you can verify canary compatibility with newly released runtimes, or with any configuration or code changes. It minimizes potential monitoring gaps by maintaining continuous monitoring during update processes and mitigates risk to end user experience in the process of keeping canaries up-to-date. The automatic retries feature helps reduce false alarms. When enabled, it provides more reliable monitoring results by distinguishing between persistent issues and intermittent failures, preventing unnecessary disruption. Users can analyze temporary failures using the canary runs graph, which employs color-coded points to represent scheduled runs and their retries. You can start using these features by accessing CloudWatch Synthetics through the AWS Management Console, AWS CLI, or CloudFormation. Dry runs for safe canary updates and automatic retries are priced the same as regular canary runs and are available in all commercial AWS Regions. To learn more about safe canary updates and automatic retries, visit the Amazon CloudWatch Synthetics documentation, or get started with Synthetics monitoring by visiting the user guide.
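As a rough sketch, automatic retries are expressed as part of the canary's schedule settings. The RetryConfig and MaxRetries field names below are assumptions about how the setting is structured, so confirm them against the CloudWatch Synthetics API reference before use.

```python
# Sketch: schedule settings for a canary with automatic retries enabled.
# The RetryConfig / MaxRetries field names are assumptions -- confirm
# them against the CloudWatch Synthetics API reference.
schedule = {
    "Expression": "rate(5 minutes)",  # run the canary every 5 minutes
    "RetryConfig": {
        "MaxRetries": 2,              # retry up to twice after a failed run
    },
}

# With boto3 installed, this could be applied to an existing canary:
#   boto3.client("synthetics").update_canary(Name="my-canary", Schedule=schedule)
```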

Amazon MSK adds support for Apache Kafka version 4.0

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 4.0, bringing the latest advancements in cluster management and performance to MSK Provisioned. Kafka 4.0 introduces a new consumer rebalance protocol, now generally available, that helps ensure smoother and faster group rebalances. In addition, Kafka 4.0 requires brokers and tools to use Java 17, providing improved security and performance, includes various bug fixes and improvements, and deprecates metadata management via Apache ZooKeeper.

To start using Apache Kafka 4.0 on Amazon MSK, simply select version 4.0.x when creating a new cluster via the AWS Management Console, AWS CLI, or AWS SDKs. You can also upgrade existing MSK Provisioned clusters with an in-place rolling update. Amazon MSK orchestrates broker restarts to maintain availability and protect your data during the upgrade. Kafka version 4.0 support is available today across all AWS Regions where Amazon MSK is offered. For more details, see the Amazon MSK Developer Guide and the Apache Kafka release notes for version 4.0.
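An in-place upgrade of an existing cluster is a single API call; a sketch of the request parameters is below. The cluster ARN and CurrentVersion strings are placeholders for your own cluster's values, and the exact target version string should be taken from the versions MSK lists for your cluster.

```python
# Sketch: parameters for Amazon MSK's UpdateClusterKafkaVersion API,
# which performs the in-place rolling upgrade described above. The
# ClusterArn and CurrentVersion values are placeholders -- use your own
# cluster's ARN and its current metadata version string.
update_params = {
    "ClusterArn": "arn:aws:kafka:us-east-1:111122223333:cluster/demo/abc123",  # placeholder
    "CurrentVersion": "K3AEGXETSR30VB",  # placeholder cluster metadata version
    "TargetKafkaVersion": "4.0.x",       # the 4.0 version string MSK offers
}

# With boto3 installed:
#   boto3.client("kafka").update_cluster_kafka_version(**update_params)
```

MSK then rolls the upgrade across brokers one at a time, so the cluster stays available throughout.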

Amazon MSK is now available in Asia Pacific (Thailand) and Mexico (Central) Regions

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in Asia Pacific (Thailand) and Mexico (Central) regions. Customers can create Amazon MSK Provisioned clusters in these regions starting today.

Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to more quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative streaming applications and less time managing Kafka clusters. Visit the AWS Regions page for all the regions where Amazon MSK is available. To get started, see the Amazon MSK Developer Guide.

Amazon Bedrock Data Automation now supports generating custom insights from videos

Amazon Bedrock Data Automation (BDA) now supports video blueprints so you can generate tailored, accurate insights in a consistent format for your multimedia analysis applications. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With video blueprints, you can customize insights — such as scene summaries, content tags, and object detection — by specifying what to generate, the output data type, and the natural language instructions to guide generation.

You can create a new video blueprint in minutes or select from a catalog of pre-built blueprints designed for use cases such as media search or highlight generation. With your blueprint, you can generate insights from a variety of video media including movies, television shows, advertisements, meeting recordings, and user-generated videos. For example, a customer analyzing a reality television episode for contextual ad placement can use a blueprint to summarize a scene where contestants are cooking, detect objects like ‘tomato’ and ‘spaghetti’, and identify the logos of condiments used for cooking. As part of the release, BDA also enhances logo detection and the Interactive Advertising Bureau (IAB) taxonomy in standard output. Video blueprints are available in all AWS Regions where Amazon Bedrock Data Automation is supported. To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with using video blueprints, visit the Amazon Bedrock console.
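To illustrate the shape of such a customization, the sketch below follows the announcement's description of what a blueprint specifies: what to generate, the output data type, and natural-language instructions. The field names are hypothetical, not the actual BDA blueprint schema.

```python
# Sketch: a hypothetical video blueprint for the ad-placement example
# above. Field names here are illustrative only -- consult the Bedrock
# Data Automation User Guide for the real blueprint schema.
video_blueprint = {
    "name": "ad-placement-analysis",
    "fields": [
        {
            "name": "scene_summary",
            "type": "string",
            "instructions": "Summarize what happens in each scene in one sentence.",
        },
        {
            "name": "detected_objects",
            "type": "list",
            "instructions": "List prominent objects visible in the scene, such as food items.",
        },
        {
            "name": "visible_logos",
            "type": "list",
            "instructions": "Identify brand logos visible on products in the scene.",
        },
    ],
}
```

Each field pairs an output type with plain-language guidance, which is how the blueprint keeps generated insights in a consistent, machine-readable format.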

Amazon Inspector enhances container security by mapping ECR images to running containers

Amazon Inspector now automatically maps your Amazon Elastic Container Registry (Amazon ECR) images to specific tasks running on Amazon Elastic Container Service (Amazon ECS) or pods running on Amazon Elastic Kubernetes Service (Amazon EKS), helping identify where the images are actively in use. This enables you to focus your limited resources on patching the most critical vulnerable images that are associated with running workloads, improving security and mean time to remediation.

With this launch, you can use the Amazon Inspector console or APIs to identify your actively used container images, when you last used an image, and which clusters are running the image. This information will be included in your findings and resource coverage details, and will be routed to EventBridge. You can also control how long an image is monitored by Inspector after its ‘last in use’ date by updating the ECR re-scan duration using the console or APIs. This is in addition to the existing push and pull date settings. Your Amazon ECR images with continuous scanning enabled on Amazon Inspector will automatically get this updated data within your Amazon Inspector findings.

Amazon Inspector is a vulnerability management service that continually scans AWS workloads including Amazon EC2 instances, container images, and AWS Lambda functions for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS organization. This feature is available at no additional cost to Amazon Inspector customers scanning their container images in Amazon Elastic Container Registry (ECR). The feature is available in all commercial and AWS GovCloud (US) Regions where Amazon Inspector is available.
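The re-scan duration mentioned above is set through Inspector's configuration API; a minimal sketch follows. The field names and the duration enum value shown are assumptions, so verify them against the Amazon Inspector API reference.

```python
# Sketch: parameters for Amazon Inspector's UpdateConfiguration API to
# control how long ECR images continue to be re-scanned. The field
# names and the "DAYS_30" enum value are assumptions -- check the
# Inspector API reference for the accepted values.
config_params = {
    "ecrConfiguration": {
        "rescanDuration": "DAYS_30",  # assumed enum value for a 30-day window
    },
}

# With boto3 installed (Inspector's API is the "inspector2" service):
#   boto3.client("inspector2").update_configuration(**config_params)
```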

Getting started with Amazon Inspector

Amazon Inspector free trial

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Architecture Blog

AWS Database Blog

AWS DevOps & Developer Productivity Blog

AWS for Industries

AWS Machine Learning Blog

AWS Messaging & Targeting Blog

AWS Storage Blog

Open Source Project

AWS CLI

AWS CDK