9/3/2025, 12:00:00 AM ~ 9/4/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon MQ now supports OAuth 2.0 plugin for RabbitMQ
Amazon MQ now supports OAuth 2.0 authentication and authorization for RabbitMQ brokers with public identity providers in both single-instance and highly available Multi-AZ cluster deployments. This feature enables RabbitMQ brokers to authenticate clients and users using JWT-encoded OAuth 2.0 access tokens, providing enhanced security and flexibility in access management.

You can configure OAuth 2.0 on your RabbitMQ broker on Amazon MQ using the AWS Console, AWS CloudFormation, the AWS Command Line Interface (CLI), or the AWS Cloud Development Kit (CDK). This feature is available in all AWS Regions where Amazon MQ is available. To get started, create a new RabbitMQ broker with OAuth 2.0 authentication or update your existing broker’s configuration to enable OAuth 2.0 support. This feature maintains compatibility with standard RabbitMQ OAuth 2.0 implementations, ensuring seamless migration for existing OAuth 2.0 enabled brokers. For detailed configuration options and steps, refer to the Amazon MQ documentation page.
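As a rough illustration, and not part of the announcement, a boto3 sketch of enabling the plugin on an existing broker might look like the following. It assumes Amazon MQ accepts the open-source RabbitMQ OAuth 2.0 plugin keys in its broker configuration; the broker ID, configuration ID, and identity-provider values are placeholders.

```python
# Minimal sketch (boto3), assuming an existing RabbitMQ broker whose Amazon MQ
# configuration accepts the standard RabbitMQ OAuth 2.0 plugin keys.
import base64
import boto3

mq = boto3.client("mq")

# rabbitmq.conf-style (Cuttlefish) settings for the OAuth 2.0 plugin.
# The identity-provider values below are placeholders, not real endpoints.
oauth2_config = """
auth_backends.1 = oauth2
auth_oauth2.resource_server_id = my-rabbitmq
auth_oauth2.issuer = https://idp.example.com/realms/broker
""".strip()

# Upload a new revision of the broker's configuration ...
response = mq.update_configuration(
    ConfigurationId="c-1234abcd-placeholder",   # placeholder configuration ID
    Data=base64.b64encode(oauth2_config.encode()).decode(),
    Description="Enable OAuth 2.0 authentication",
)

# ... then point the broker at the new revision and reboot so it takes effect.
mq.update_broker(
    BrokerId="b-1234abcd-placeholder",          # placeholder broker ID
    Configuration={
        "Id": "c-1234abcd-placeholder",
        "Revision": response["LatestRevision"]["Revision"],
    },
)
mq.reboot_broker(BrokerId="b-1234abcd-placeholder")
```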
Amazon CloudWatch now supports querying metrics data up to two weeks old
Amazon CloudWatch now allows you to query metrics data up to two weeks in the past using the Metrics Insights query source. CloudWatch Metrics Insights offers fast, flexible, SQL-based queries. This new capability allows you to display, aggregate, or slice and dice metrics data older than three hours, for enhanced visualization and investigation.

Previously, customers creating dashboards and alarms to monitor dynamic groups of metrics across their resources and applications could visualize only up to three hours of data when using Metrics Insights SQL queries. This enhancement helps customers identify trends and investigate impact over a longer period of time, even days after an event, improving teams' operational health and helping ensure impacts are not missed. Querying metrics data up to two weeks old with Metrics Insights is now available in commercial AWS Regions, automatically and at no additional cost. Standard pricing applies for alarms, dashboards, or API usage on Metrics Insights; see CloudWatch pricing for details. To learn more about metrics queries with Metrics Insights, visit the CloudWatch documentation.
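As an illustrative sketch (not taken from the announcement), the following boto3 call runs a Metrics Insights query over a 14-day window with GetMetricData; the EC2 query itself is just an example.

```python
# Minimal sketch (boto3): run a Metrics Insights (SQL) query over the past
# two weeks with GetMetricData. The query below is illustrative.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "cpu_by_instance",
            "Expression": (
                "SELECT AVG(CPUUtilization) "
                'FROM SCHEMA("AWS/EC2", InstanceId) '
                "GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10"
            ),
            "Period": 300,
        }
    ],
    StartTime=now - timedelta(days=14),   # previously limited to 3 hours
    EndTime=now,
)

for result in response["MetricDataResults"]:
    print(result["Label"], result["Values"][:5])
```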
Amazon CloudWatch query alarms now support monitoring metrics individually
Amazon CloudWatch now allows you to monitor multiple individual metrics via a single alarm. By dynamically including metrics to monitor via a query, this new capability eliminates the need to manually manage separate alarms for dynamic resource fleets.

As customers rely more on autonomous teams and autoscaled resources, they face a choice between maintenance-free aggregated monitoring and the operational cost of maintaining per-resource alarms. Alarms that evaluate multiple metrics provide granular monitoring with individual actions through a single alarm that automatically adjusts in real time as resources are created or deleted. This reduces operational effort, allowing customers to focus on the value of their observability while ensuring no resources go unmonitored. Monitoring multiple metrics with a single alarm is now available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions. To start alarming on multiple metrics, create an alarm on a Metrics Insights (SQL) metrics query using GROUP BY and ORDER BY conditions. The alarm automatically updates the query results with each evaluation and matches the corresponding metrics as resources change. You can configure alarms through the CloudWatch console, AWS CLI, CloudFormation, or CDK. Metrics Insights query alarm pricing applies; see CloudWatch pricing for details. To learn more about monitoring multiple metrics with query alarms and improving your monitoring efficiency, visit the CloudWatch alarms documentation.
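As a hedged sketch of the setup described above, the following boto3 call creates a single alarm on a Metrics Insights query that groups by InstanceId. The names and threshold are illustrative, and the exact alarm options you need may differ from this example.

```python
# Minimal sketch (boto3): one alarm whose Metrics Insights query returns a
# metric per instance; names, period, and threshold are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-per-instance",
    Metrics=[
        {
            "Id": "q1",
            "Expression": (
                "SELECT AVG(CPUUtilization) "
                'FROM SCHEMA("AWS/EC2", InstanceId) '
                "GROUP BY InstanceId ORDER BY AVG() DESC"
            ),
            "Period": 300,
            "ReturnData": True,
        }
    ],
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```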
AWS Direct Connect announces new location in Nairobi, Kenya
Today, AWS announced the opening of a new AWS Direct Connect location within East African Data Centres NBO1 near Nairobi, Kenya. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This site is the first AWS Direct Connect location in Kenya and offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. For more information on the over 145 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages, or visit the getting started page to learn more about how to purchase and deploy Direct Connect.
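As a small illustrative sketch (not part of the announcement), the following boto3 snippet lists Direct Connect locations with their available port speeds, which is one way to find the new NBO1 location programmatically.

```python
# Minimal sketch (boto3): list Direct Connect locations and port speeds.
import boto3

dx = boto3.client("directconnect")

for loc in dx.describe_locations()["locations"]:
    print(
        loc["locationCode"],
        loc["locationName"],
        loc.get("availablePortSpeeds", []),
        loc.get("availableMacSecPortSpeeds", []),
    )
```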
Amazon RDS for Oracle and Amazon RDS Custom for Oracle now support bare metal instances
Amazon RDS for Oracle and Amazon RDS Custom for Oracle now support bare metal instances. You can use M7i, R7i, X2iedn, X2idn, X2iezn, M6i, M6id, M6in, R6i, R6id, and R6in bare metal instances at a 25% lower price compared to equivalent virtualized instances.

With bare metal instances, you can combine multiple databases onto a single bare metal instance to reduce cost by using the Multi-tenant feature. For example, databases running on a db.r7i.16xlarge instance and a db.r7i.8xlarge instance can be consolidated into individual pluggable databases on a single db.r7i.metal-24xl instance. Furthermore, you may be able to reduce your commercial database license and support costs by using bare metal instances, since they provide full visibility into the number of CPU cores and sockets of the underlying server. Refer to the Oracle Cloud Policy and Oracle Core Factor Table, and consult your licensing partner, to determine if you can reduce license and support costs. Bare metal instances are available under the Bring Your Own License (BYOL) model for Oracle Enterprise Edition. Refer to Amazon RDS for Oracle Pricing and Amazon RDS Custom for Oracle Pricing for available instance configurations, pricing, and Region availability.
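As an illustrative boto3 sketch (not from the announcement), provisioning an instance on a bare metal class might look like the following. The identifier, storage size, and credentials are placeholders, and the classes actually offered depend on engine version and Region.

```python
# Minimal sketch (boto3): create an RDS for Oracle instance on a bare metal
# class; the class name comes from the announcement, other values are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="oracle-metal-demo",
    DBInstanceClass="db.r7i.metal-24xl",        # bare metal instance class
    Engine="oracle-ee",                         # Enterprise Edition
    LicenseModel="bring-your-own-license",      # BYOL, as described above
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    AllocatedStorage=200,
    MultiAZ=False,
)
```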
AWS Config now supports 5 new resource types
AWS Config now supports 5 additional AWS resource types. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.

With this launch, if you have enabled recording for all resource types, AWS Config will automatically track these new additions (a minimal recording sketch follows the list below). The newly supported resource types are also available in Config rules and Config aggregators. You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available.
Resource Types:
AWS::CodeArtifact::Domain
AWS::Config::ConformancePack
AWS::Glue::Database
AWS::NetworkManager::TransitGatewayPeering
AWS::RolesAnywhere::TrustAnchor
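As a minimal boto3 sketch (not part of the announcement), a configuration recorder set to record all supported resource types picks up newly supported types such as those above automatically. The recorder name and role ARN are placeholders.

```python
# Minimal sketch (boto3): record all supported resource types so newly
# supported types are tracked automatically. Name and role ARN are placeholders.
import boto3

config = boto3.client("config")

config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",
        "recordingGroup": {
            "allSupported": True,                # record every supported type
            "includeGlobalResourceTypes": True,
        },
    }
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
```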
Amazon Bedrock now supports Global Cross-Region inference for Anthropic Claude Sonnet 4
Anthropic’s Claude Sonnet 4 is now available with Global cross-Region inference in Amazon Bedrock, so you can now use the Global Claude Sonnet 4 inference profile to route your inference requests to any supported commercial AWS Region for processing, optimizing available resources and enabling higher model throughput.

Amazon Bedrock is a comprehensive, secure, and flexible service for building generative AI applications and agents. When using on-demand and batch inference in Amazon Bedrock, your requests may be restricted by service quotas or during peak usage times. Cross-Region inference enables you to seamlessly manage unplanned traffic bursts by distributing traffic and utilizing compute across different AWS Regions, enabling higher throughput. Previously, you could choose cross-Region inference profiles tied to a specific geography such as the US, EU, or APAC, which automatically selected the optimal commercial AWS Region within that geography to process your inference requests. For generative AI use cases that do not require a geography-specific inference profile, you can now use the Global cross-Region inference profile to further increase your model throughput. To learn more about Global cross-Region inference in Amazon Bedrock, visit the documentation on increasing throughput with cross-Region inference, see supported Regions and models for inference profiles, and follow the steps in the Use an inference profile in model invocation page to get started.
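As a hedged boto3 sketch, invoking the model through a global inference profile might look like the following. The profile ID shown follows the "global." naming pattern but is an assumption, so check the inference profiles available in your account (for example, in the Bedrock console) for the exact identifier.

```python
# Minimal sketch (boto3): call Claude Sonnet 4 via a global cross-Region
# inference profile. The modelId below is an assumed profile identifier.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="global.anthropic.claude-sonnet-4-20250514-v1:0",  # assumed profile ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize cross-Region inference."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```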
AWS Clean Rooms supports adding new data providers to existing collaborations
AWS Clean Rooms now supports the ability to add data provider members to an existing collaboration, offering customers enhanced flexibility as they iterate on and develop new use cases with their partners. With this launch, you can collaborate with new data providers without having to set up a new collaboration. Collaboration owners can configure an existing Clean Rooms collaboration to add new members that only contribute data, while benefiting from the privacy controls existing members already configured within the collaboration. New data providers invited to an existing collaboration can be reviewed in the change history, enhancing transparency across members. For example, when a publisher creates a Clean Rooms collaboration with an advertiser, they can enable adding new data providers such as a measurement company, which allows the advertiser to enrich their audience segments with third-party data before activating an audience with the publisher. This approach reduces onboarding time while maintaining the existing privacy controls for you and your partners.

AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
AWS Clean Rooms ML now supports redacted error log summaries
AWS Clean Rooms ML custom modeling enables you and your partners to train and run inference on custom ML models using collective datasets at scale without having to share your sensitive data or intellectual property. With today’s launch, collaborators can configure a new privacy control that sends redacted error log summaries to specified collaboration members. Error log summaries include the exception type, error message, and the line in the code where the error occurred. When associating the model to the collaboration, collaborators can decide and agree which members will receive error log summaries and whether detectable Personally Identifiable Information (PII), numbers, or custom strings will be redacted from those summaries.

AWS Clean Rooms ML helps you and your partners apply privacy-enhancing controls to safeguard your proprietary data and ML models while generating predictive insights, all without sharing or copying one another’s raw data or models. For more information about the AWS Regions where AWS Clean Rooms ML is available, see the AWS Regions table. To learn more, visit AWS Clean Rooms ML.
Amazon SageMaker Catalog adds support for governed classification with restricted terms
Amazon SageMaker Catalog now supports governed classification through Restricted Classification Terms, allowing catalog administrators to control which users and projects can apply sensitive glossary terms to their assets. This new capability is designed to help organizations enforce metadata standards and ensure classification consistency across teams and domains.

With this launch, glossary terms can be marked as “restricted”, and only authorized users or groups, defined through explicit policies, can use them to classify data assets. For example, a centralized data governance team may define terms like “Seller-MCF” or “PII” that reflect data handling policies. These terms can now be governed so only specific project members (e.g., trusted admin groups) can apply them, which helps support proper control over how sensitive classifications are assigned. This feature is now available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To get started and learn more about this feature, see the SageMaker Unified Studio user guide.
AWS Blogs
AWS Japan Blog (Japanese)
- Maximizing the value of smart industrial machines with generative AI and IoT
- Weekly Generative AI with AWS — 2025/8/4
AWS Cloud Operations Blog
AWS Big Data Blog
- Deep dive into the Amazon Managed Service for Apache Flink application lifecycle – Part 2
- Deep dive into the Amazon Managed Service for Apache Flink application lifecycle – Part 1
Containers
AWS Database Blog
Desktop and Application Streaming
AWS for Industries
Artificial Intelligence
- Authenticate Amazon Q Business data accessors using a trusted token issuer
- Unlocking the future of professional services: How Proofpoint uses Amazon Q Business
- Enhancing LLM accuracy with Coveo Passage Retrieval on Amazon Bedrock
- Train and deploy models on Amazon SageMaker HyperPod using the new HyperPod CLI and SDK