4/30/2025, 12:00:00 AM ~ 5/1/2025, 12:00:00 AM (UTC)
Recent Announcements
AWS Resource Explorer is now available in 3 additional AWS Regions
Today, AWS Resource Explorer has expanded the availability of resource search and discovery to 3 additional AWS Regions: Asia Pacific (Malaysia), Asia Pacific (Thailand), and Mexico (Central).
With AWS Resource Explorer you can search for and discover your AWS resources across AWS Regions and accounts in your organization, either using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. For more information about the AWS Regions where AWS Resource Explorer is available, see the AWS Region table. To turn on AWS Resource Explorer, visit the AWS Resource Explorer console. Read about getting started in our AWS Resource Explorer documentation, or explore the AWS Resource Explorer product page.
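As a quick illustration, here is a minimal boto3 sketch of such a search. It assumes Resource Explorer is already turned on with an aggregator index and a default view in the account; the query string and Region code (ap-southeast-5 for Malaysia) are only examples.

```python
# Hedged sketch: search for EC2 resources in the newly added Malaysia Region.
import boto3

explorer = boto3.client("resource-explorer-2", region_name="ap-southeast-5")

response = explorer.search(
    QueryString="service:ec2 region:ap-southeast-5",  # illustrative query
    MaxResults=50,
)
for resource in response["Resources"]:
    print(resource["ResourceType"], resource["Arn"])
```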
Amazon SageMaker scheduling experience for Visual ETL and Query editors
Amazon SageMaker now offers a unified scheduling experience for visual ETL flows and queries. The next generation of Amazon SageMaker is the center for all your data, analytics, and AI, and includes SageMaker Unified Studio, a single data and AI development environment. Visual ETL in Amazon SageMaker provides a drag-and-drop interface for building ETL flows and authoring flows with Amazon Q. The query editor provides a place to write and run queries, view results, and share your work with your team. This new scheduling experience simplifies scheduling for Visual ETL and Query editor users.
With unified scheduling you can now schedule your workloads with Amazon EventBridge Scheduler from the same visual interface you use to author your query or visual ETL flow. Previously, you needed to create a code-based workflow in order to run a single flow or query on a schedule. You can also view, modify, pause, or resume these schedules and monitor the runs they invoke. This feature is now available in all AWS Regions where Amazon SageMaker is available; see the supported Region list for the most up-to-date availability information. To learn more, visit the Amazon SageMaker Unified Studio documentation, blog post, and Amazon EventBridge Scheduler pricing page.
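The unified experience creates the underlying Amazon EventBridge Scheduler schedule for you. Purely as a hedged sketch of what an equivalent schedule could look like through boto3, with a hypothetical Lambda target standing in for the flow trigger:

```python
# Illustrative only: the schedule name, cron expression, Lambda ARN, role, and
# payload are placeholders, not what SageMaker Unified Studio creates for you.
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="nightly-sales-etl",
    ScheduleExpression="cron(0 2 * * ? *)",      # run daily at 02:00 UTC
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:run-my-flow",  # hypothetical target
        "RoleArn": "arn:aws:iam::123456789012:role/etl-schedule-role",
        "Input": '{"flowName": "sales-daily-load"}',
    },
)
```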
Amazon RDS announces Cross-Region Automated Backups in five additional AWS Regions
Cross-Region Automated Backup replication for Amazon RDS is now available in five additional AWS Regions. This launch allows you to set up automated backup replication between Australia (Melbourne) and Australia (Sydney); between Asia Pacific (Hong Kong) and Asia Pacific (Singapore) or Asia Pacific (Tokyo); between Asia Pacific (Malaysia) and Asia Pacific (Singapore); between Canada (Central) and Canada West (Calgary); and between Europe (Zurich) and Europe (Frankfurt) or Europe (Ireland).
Automated Backups enable recovery for mission-critical databases by giving you the ability to restore your database to a specific point in time within your backup retention period. With Cross-Region Automated Backup replication, RDS replicates snapshots and transaction logs to the chosen destination AWS Region. In the event that your primary AWS Region becomes unavailable, you can restore the automated backup to a point in time in the secondary AWS Region and quickly resume operations. Because transaction logs are uploaded to the target AWS Region frequently, you can achieve a Recovery Point Objective (RPO) within the last few minutes. You can set up Cross-Region Automated Backup replication with just a few clicks in the Amazon RDS Management Console or using the AWS SDK or CLI. Cross-Region Automated Backup replication is available on Amazon RDS for PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for Microsoft SQL Server. For more information, including instructions on getting started, read the Amazon RDS documentation.
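For example, here is a minimal boto3 sketch of enabling replication from Australia (Sydney) into Australia (Melbourne) with the StartDBInstanceAutomatedBackupsReplication API; the instance ARN and KMS key are placeholders.

```python
# Call the API in the destination Region and point it at the source instance ARN.
import boto3

rds = boto3.client("rds", region_name="ap-southeast-4")  # destination: Melbourne

rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn="arn:aws:rds:ap-southeast-2:123456789012:db:orders-prod",  # source: Sydney
    BackupRetentionPeriod=7,   # days to retain replicated backups in the destination Region
    KmsKeyId="arn:aws:kms:ap-southeast-4:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```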
AWS Elastic Beanstalk adds controls for default security group management
AWS Elastic Beanstalk now gives customers the option to use default security groups or their own custom security groups when deploying applications. This new feature provides greater control over network access and security configurations.
With this update, customers can use custom security groups instead of default security groups for both new and existing Elastic Beanstalk environments. This applies to the EC2 instances within the environment and, for load-balanced environments, to the load balancer as well. Previously, Elastic Beanstalk would automatically add a default security group. This enhancement enables customized security policies and simplifies security management. This feature is available in all of the AWS Commercial Regions and AWS GovCloud (US) Regions that Elastic Beanstalk supports. For a complete list of regions and service offerings, see AWS Regions. To learn more about using custom security groups with Elastic Beanstalk, see the AWS Elastic Beanstalk Developer Guide. For additional information about Elastic Beanstalk features, visit the AWS Elastic Beanstalk product page.
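For reference, a hedged boto3 sketch of attaching your own security groups to an existing environment using long-standing option settings; the group IDs and environment name are placeholders, and the new controls for default security group management sit alongside these settings in the console and configuration options.

```python
# Point an existing environment at custom security groups via option settings.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-prod",
    OptionSettings=[
        {   # security groups attached to the environment's EC2 instances
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "SecurityGroups",
            "Value": "sg-0123456789abcdef0",
        },
        {   # security groups attached to the Application Load Balancer
            "Namespace": "aws:elbv2:loadbalancer",
            "OptionName": "SecurityGroups",
            "Value": "sg-0fedcba9876543210",
        },
    ],
)
```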
Amazon Connect launches administrator access for agent schedules
Amazon Connect now lets you grant administrator access to agent schedules, making it easier to address key operational needs with minimal configuration. With this launch, you can now give certain users access to all published agent schedules without being added as a supervisor to every staff group. For example, users such as centralized schedulers or auditors who require a broad view of agent schedules across the organization can now be granted this access in a few clicks, thus reducing time spent on access management and improving overall operational efficiency.
This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
Amazon OpenSearch Service now supports OpenSearch version 2.19
You can now run OpenSearch version 2.19 in Amazon OpenSearch Service, which introduces several improvements in the areas of vector search, observability, and OpenSearch Dashboards.
This release introduces four key capabilities for vector search applications. The Faiss engine now supports AVX512 SIMD instructions to accelerate vector similarity computations. The ML inference search response processor can now rank search hits and update scores based on model predictions, enabling sophisticated, context-aware document ranking and result augmentation. Lucene binary vectors now complement the existing Faiss engine binary vector support, offering greater flexibility for vector search applications. Hybrid search now includes pagination support and reciprocal rank fusion to improve result ranking, along with a debugging tool for the score and rank normalization process.
The launch also introduces query insights dashboards that let users monitor and analyze the top queries collected by the Query Insights plugin. Anomaly detection offers two key improvements: enhanced anomaly definition capabilities allow users to specify multiple criteria to identify both spikes and dips in data patterns, and a new dedicated index for flattened results improves query performance and the dashboard visualization experience. Finally, you can now use template queries to create search queries that contain placeholder variables, allowing for more flexible, efficient, and secure search operations. For information on upgrading to OpenSearch 2.19, please see the documentation. OpenSearch 2.19 is now available in all AWS Regions where Amazon OpenSearch Service is available.
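As an illustration of the hybrid search improvements, here is a hedged sketch of a paginated hybrid (lexical plus vector) query sent through a search pipeline that performs score normalization; the domain endpoint, index, field names, pipeline name, and credentials are placeholders.

```python
# Hybrid query combining a lexical match clause and a k-NN clause, paginated
# with from/size; assumes a search pipeline with a normalization processor exists.
import requests

DOMAIN = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder endpoint

query = {
    "from": 0,
    "size": 10,
    "query": {
        "hybrid": {
            "queries": [
                {"match": {"title": "wireless headphones"}},            # lexical clause
                {"knn": {"title_embedding": {"vector": [0.1] * 384,     # vector clause
                                             "k": 10}}},
            ]
        }
    },
}

resp = requests.post(
    f"{DOMAIN}/products/_search",
    params={"search_pipeline": "hybrid-norm-pipeline"},  # placeholder pipeline name
    json=query,
    auth=("admin", "admin-password"),                    # placeholder credentials
    timeout=30,
)
print(resp.json()["hits"]["total"])
```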
Amazon SES Mail Manager now supports Publish to Amazon SNS Topic Rule Action
Amazon Simple Email Service (SES) announces that its Mail Manager email modernization and infrastructure features now include a rule action that publishes messages as Amazon Simple Notification Service (SNS) notifications. The notification includes the complete email content, and has options for SNS Topic and Encoding.
Amazon SNS is a fully managed service that provides message delivery from publishers (producers) to subscribers (consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Subscribers can choose to receive these notifications through a variety of endpoints, including email, SMS, and Lambda. By centralizing notification preferences within SNS, customers enhance messaging between applications and users while gaining the advantages of high availability, durability, and flexibility. Using the Publish to SNS rule action within Mail Manager increases the number and type of delivery destinations available to customers as part of their larger ruleset configuration. Mail Manager’s Publish to SNS rule action is available in all 17 AWS Regions where Mail Manager is launched. There is no additional fee from SES to use this feature, though charges from AWS for SNS and destination channel activity may apply. Customers can learn more about SES Mail Manager by clicking here.
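A hedged sketch of a Lambda function subscribed to the SNS topic used by the rule action: it assumes the notification body carries the full RFC 822 message, base64-encoded when the rule action's Encoding option is BASE64, so adjust the decoding to match your configuration.

```python
# Log basic headers from emails delivered by the Mail Manager Publish to SNS rule action.
import base64
import binascii
import email
import json


def handler(event, context):
    for record in event["Records"]:
        payload = record["Sns"]["Message"]
        try:
            # Assumes the rule action's Encoding option is BASE64.
            raw_bytes = base64.b64decode(payload, validate=True)
        except (binascii.Error, ValueError):
            # Fall back to treating the notification body as plain text.
            raw_bytes = payload.encode("utf-8")
        message = email.message_from_bytes(raw_bytes)
        print(json.dumps({"subject": message["Subject"], "from": message["From"]}))
```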
AWS Clean Rooms now supports multiple results receivers in a collaboration
Today, AWS Clean Rooms announces support for multiple collaboration members to receive analysis results from queries using Spark SQL. This streamlined capability enhances usability and transparency by eliminating the need for additional audit mechanisms outside of the collaboration. With this feature, multiple members can receive and validate analysis results from queries across collective datasets directly from the collaboration.
You can designate multiple collaborators as result receivers when executing a Spark SQL query. Results are automatically delivered to all selected collaborators who are configured in both the collaboration settings and table controls. For example, in a collaboration between a media publisher and an advertiser, the publisher can run a query across their collective datasets; the query results are sent to both parties’ chosen Amazon S3 locations for validation. AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
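A rough boto3 sketch of starting such a query is shown below; the membership identifier, query, and S3 output location are placeholders, and delivery to the additional designated receivers is controlled by the collaboration and table configuration rather than by an extra parameter shown here.

```python
# Run a protected SQL query in a Clean Rooms collaboration and print its ID.
import boto3

cleanrooms = boto3.client("cleanrooms")

response = cleanrooms.start_protected_query(
    type="SQL",
    membershipIdentifier="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    sqlParameters={"queryString": "SELECT campaign, COUNT(*) FROM impressions GROUP BY campaign"},
    resultConfiguration={
        "outputConfiguration": {
            "s3": {"resultFormat": "CSV", "bucket": "my-cleanrooms-results", "keyPrefix": "runs/"}
        }
    },
)
print(response["protectedQuery"]["id"])
```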
Amazon Connect now provides bulk removal of agent schedules
Amazon Connect now provides bulk removal of agent schedules, making day-to-day management of agent schedules more efficient. With this launch, you can now remove schedules for up to 400 agents for a single day, or up to 30 days for a single agent. For example, remove all schedules for next Monday as the contact center is going to be closed, or remove future shifts for an agent who is no longer with the organization. With bulk remove, managers no longer have to remove agent shifts one agent and one day at a time, thus improving manager productivity by reducing time spent on managing agent schedules.
This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
MAP enhancements to accelerate AI customer adoption
Starting today, we’re enhancing the AWS Migration Acceleration Program (MAP) with two key capabilities to help you accelerate your modernization efforts and drive customers’ adoption of AI:
New “Move to AI” Modernization Pathway, featuring Amazon Bedrock and Amazon SageMaker. This pathway enables you to help customers transform their existing applications and business processes with proven AI patterns that deliver measurable business value.
Amazon Connect is now a qualifying service in the MAP Modernization Strategic Partner Incentive (SPI). This enables you to help customers transform their contact centers with AI-powered features that increase agent productivity and enhance customer experiences.
These enhancements strengthen your ability to lead customers’ AI transformation and drive contact center modernization. Learn more:
AWS Partner Funding Benefits Program Guide
MAP Modernization SPI Eligible Services
EC2 Image Builder now integrates with SSM Parameter Store
EC2 Image Builder now integrates with Systems Manager Parameter Store, offering customers a streamlined approach for referencing SSM parameters in their image recipes, components, and distribution configurations. This capability allows customers to dynamically select base images within their image recipes, easily use configuration data and sensitive information in components, and update their SSM parameters with the latest output images.
Prior to today, customers had to specify AMI IDs in their image recipes to use custom base images, leading to a constant maintenance cycle whenever those base images had to be updated. Furthermore, customers were required to create custom scripts to update SSM parameters with output images and to use SSM parameter values in components, resulting in substantial operational overhead. Now, customers can use SSM parameters as inputs for their image recipes, enabling them to dynamically retrieve the latest base image. This integration extends to components, where SSM parameters can be referenced to save, retrieve, and use sensitive information, and to the distribution process, where SSM parameters can be updated with the latest output images. These enhancements streamline the image building workflow, reduce manual intervention, and improve overall efficiency. This capability is available to all customers at no additional cost in all AWS commercial Regions, the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. Customers can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
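As a loosely hedged example, an image recipe whose parent image is resolved from an SSM parameter might be created like this; the "ssm:" reference format, parameter name, and component version are assumptions for illustration, so consult the EC2 Image Builder documentation for the documented syntax.

```python
# Assumed sketch: create an image recipe that resolves its base AMI from an SSM parameter.
import boto3

imagebuilder = boto3.client("imagebuilder")

imagebuilder.create_image_recipe(
    name="base-hardened-linux",
    semanticVersion="1.0.0",
    # Assumed reference format: resolve the AMI ID from an SSM parameter at build time.
    parentImage="ssm:/golden/base-ami",
    components=[
        {"componentArn": "arn:aws:imagebuilder:us-east-1:aws:component/update-linux/x.x.x"}
    ],
)
```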
Amazon VPC Lattice now supports IPv6 for management endpoints
Amazon VPC Lattice introduces dual-stack support for its management API, enabling you to connect using Internet Protocol version 6 (IPv6), Internet Protocol version 4 (IPv4), or dual-stack clients. Dual-stack support is also available when the Amazon VPC Lattice management API endpoint is privately accessed from your Amazon Virtual Private Cloud (VPC) using AWS PrivateLink. Dual-stack endpoints are made available on a new AWS DNS domain name; the existing Amazon VPC Lattice management API endpoints are maintained for backwards compatibility.
Amazon VPC Lattice is an application networking service that simplifies connecting, securing, and monitoring service-to-service communication. You can use Amazon VPC Lattice to facilitate cross-account and cross-VPC connectivity, as well as application-layer load balancing for your workloads. Whether the underlying compute is instances, containers, or serverless, developers can work with native integration on the compute platform of their choice. With simultaneous support for both IPv4 and IPv6 clients on VPC Lattice endpoints, you can gradually transition from IPv4-based to IPv6-based systems and applications without needing to switch over all at once. This helps you meet IPv6 compliance requirements and removes the need for expensive networking equipment to handle the address translation between IPv4 and IPv6.
To learn more, see the VPC Lattice user guide and IPv6 on AWS.
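A minimal sketch, assuming a botocore version that knows the new dual-stack hostname for this service: opting a boto3 client into dual-stack endpoints so management API calls can travel over IPv6 or IPv4.

```python
# Use dual-stack endpoints for the VPC Lattice management API.
import boto3
from botocore.config import Config

lattice = boto3.client(
    "vpc-lattice",
    region_name="us-west-2",
    config=Config(use_dualstack_endpoint=True),
)

# Ordinary management API call; the transport (IPv4 or IPv6) is chosen by the client.
for network in lattice.list_service_networks().get("items", []):
    print(network["name"], network["arn"])
```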
Anonymous user access for Q Business
Today, we are excited to announce the general availability of anonymous user access for Amazon Q Business. This feature allows customers to create Q Business applications for anonymous users using publicly accessible content. Q Business applications created in this anonymous mode are billed on an API consumption basis.
Customers can now create anonymous Q Business applications to power use cases such as public website Q&A, documentation portals, and customer self-service experiences, where user authentication is not required and content is publicly available. For example, AnyCompany wants to improve its website’s visitor support experience by providing a generative AI assistant over its publicly available help and product pages. The customer would create an anonymous Q Business application and index all the public product help and documentation to power the assistant. To deploy the anonymous application, customers can implement the anonymous Chat/ChatSync APIs for greater UX control or embed the built-in anonymous web experience via an iFrame. Anonymous applications are billed on an API consumption basis, offering a scalable way to deploy Q Business generative AI experiences to large anonymous audiences. The anonymous chat APIs and web experience are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney) AWS Regions. For more information, please consult our documentation.
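A hedged sketch of calling the ChatSync API against an application created in anonymous mode; the application ID is a placeholder, and no user identity is passed on the assumption that anonymous applications do not require one.

```python
# Ask a question against an anonymous-mode Q Business application.
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="11111111-2222-3333-4444-555555555555",  # placeholder application ID
    userMessage="How do I reset my device to factory settings?",
)
print(response["systemMessage"])
```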
AWS Elemental MediaTailor introduces recurring ad prefetch for live streaming
Today AWS Elemental MediaTailor launched recurring prefetch schedules, enhancing server-side ad prefetch capabilities for live streaming content. This feature helps content providers and broadcasters efficiently manage ad insertion for live events where exact ad break timing is unpredictable, while maintaining high-quality viewer experiences. Content providers can benefit from this during major sporting events, live concerts, or news broadcasts where ad break timing may vary.
The new mode enables you to create a single schedule for an entire live event or linear channel. After every ad break, MediaTailor automatically creates the prefetch schedule for the next ad break while ensuring that any new playback sessions that join the stream are provisioned with ads. This automation reduces operational overhead, ensures optimal ad delivery performance, and allows ads to be pre-transcoded to improve monetization. The feature also includes ad request traffic-shaping settings to prevent ad server overload for viewing events with large audiences and to maintain low latency for manifest delivery. To learn more about AWS Elemental MediaTailor, visit the AWS Elemental MediaTailor page. For detailed implementation guidance on the new recurring prefetch schedule feature, see the MediaTailor documentation.
AWS WAF Targeted Bot Control and Fraud Control are now available in two additional Regions
Starting today, you can use the AWS WAF Bot Control and Fraud Control rule groups in two additional AWS Regions: Canada West (Calgary) and Asia Pacific (Malaysia).
AWS WAF Bot Control and Fraud Control deliver comprehensive security for web applications, APIs, and mobile apps. Bot Control protects against automated bot traffic with easy deployment and configurable actions, ensuring scalable management. Fraud Control focuses on preventing account takeovers and fraudulent account creation, leveraging machine learning to reduce financial losses and enhance user trust. Both solutions integrate seamlessly with AWS WAF, providing real-time visibility and detailed metrics for effective protection and operational efficiency. For more information, visit the AWS WAF page. For pricing details, visit the AWS WAF Pricing page.
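For context, this is roughly the rule entry you would include in a web ACL's Rules list to enable targeted Bot Control in one of the newly supported Regions; the rule name, priority, and metric name are placeholders.

```python
# Managed rule group entry for targeted Bot Control, expressed as a Python dict.
targeted_bot_control_rule = {
    "Name": "targeted-bot-control",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            "ManagedRuleGroupConfigs": [
                {"AWSManagedRulesBotControlRuleSet": {"InspectionLevel": "TARGETED"}}
            ],
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "targeted-bot-control",
    },
}
# Append this dict to the Rules parameter of the wafv2 create_web_acl or
# update_web_acl call for a REGIONAL web ACL in ap-southeast-5 or ca-west-1.
```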
Amazon Kinesis Data Streams now supports tagging and Attribute-Based Access Control for consumers
Today, Amazon Kinesis Data Streams introduces support for tagging and Attribute-Based Access Control (ABAC) for enhanced fan-out consumers. You can register enhanced fan-out consumers to have dedicated low-latency read throughput per shard, up to 2 MB/s. ABAC is an authorization strategy that defines access permissions based on tags that can be attached to IAM users, roles, and AWS resources for fine-grained access control. This new feature enables you to apply tags for allocating costs and simplifying permission management for your enhanced fan-out consumers.
With this launch, you can now tag your enhanced fan-out consumers used by different business units to track and allocate costs in AWS Cost Explorer without manually tracking costs per consumer. You can apply tags to enhanced fan-out consumers using the Kinesis Data Streams API or AWS Command Line Interface (CLI). Additionally, ABAC support for enhanced fan-out consumers allows you to use IAM policies to allow or deny specific Kinesis Data Streams API actions when the IAM principal’s tags match the tags on a registered consumer. Tagging and Attribute-Based Access Control for enhanced fan-out consumers are available in all AWS Regions, including the AWS China and AWS GovCloud (US) Regions. To learn more about tagging and ABAC support for consumers, see Tag your resources and Attribute-Based Access Control (ABAC) for AWS.
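As a hedged sketch of the ABAC side, an identity policy like the following would limit reads to enhanced fan-out consumers tagged team=analytics; the tag key and value, account ID, and stream name are placeholders.

```python
# Create an identity policy scoped to consumers carrying a specific resource tag.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kinesis:SubscribeToShard", "kinesis:DescribeStreamConsumer"],
            "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream/consumer/*",
            "Condition": {"StringEquals": {"aws:ResourceTag/team": "analytics"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="analytics-efo-consumer-access",
    PolicyDocument=json.dumps(policy_document),
)
```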
AWS Config now supports 13 new resource types
AWS Config now supports 13 additional AWS resource types. This expansion provides greater coverage of your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.
With this launch, if you have enabled recording for all resource types, AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.
You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available:
Resource Types:
AWS::AppIntegrations::Application
AWS::EC2::EIPAssociation
AWS::EC2::InstanceConnectEndpoint
AWS::EC2::SnapshotBlockPublicAccess
AWS::EC2::VPCEndpointConnectionNotification
AWS::ElastiCache::UserGroup
AWS::InspectorV2::Activation
AWS::Macie::Session
AWS::Route53Profiles::Profile
AWS::OpenSearchServerless::Collection
AWS::S3::StorageLensGroup
AWS::SecurityHub::Standard
AWS::SageMaker::InferenceExperiment
To view the complete list of AWS Config supported resource types, see the supported resource types page.
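A quick boto3 sketch for confirming that one of the new types is being recorded, assuming the configuration recorder in the Region records all resource types:

```python
# List discovered resources of one of the newly supported types.
import boto3

config = boto3.client("config")

resp = config.list_discovered_resources(
    resourceType="AWS::EC2::InstanceConnectEndpoint",
)
for item in resp.get("resourceIdentifiers", []):
    print(item["resourceType"], item["resourceId"])
```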
Amazon Cognito adds enhanced context support for machine-to-machine (M2M) authorization flows
Amazon Cognito now allows you to include additional contextual information in the OAuth 2.0 client credentials flow for M2M access token requests, enhancing your control over machine-based interactions. M2M authorization is commonly used for automated processes like data synchronization, event-driven workflows, and microservice communication. This capability enables customers to provide context-specific details (e.g., attributes of the machine such as IP address, location, or environment; or business context like application name or tenant ID) when requesting access tokens for machine-based interactions. For example, consider an organization’s internal API service that needs different access patterns across development and production environments. Using ClientMetadata, you can now specify {"environment": "dev"} or {"environment": "prod"} when requesting access tokens. With Cognito’s support for pre-token generation Lambda triggers, you can process this context to customize token scopes (e.g., api:read_all, api:write_restricted) and add environment-specific claims like rate limits. The API can then examine these scopes and claims to enforce appropriate access controls and rate limiting.
Without the ClientMetadata parameter, customers would often need separate app clients (e.g., 'internal-api-dev', 'internal-api-prod') to express contextual information, causing app client sprawl. Now, a single M2M app client can include contextual metadata with each request, reducing the need for multiple app clients and optimizing app client cost while providing context-aware authorization. This capability is available to Amazon Cognito customers using the Essentials or Plus tiers in AWS Regions where Cognito is available, including the AWS GovCloud (US) Regions. To learn more, refer to the developer guide and the Pricing Detail Page for M2M authorization flows pricing.
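A hedged sketch of a pre-token generation Lambda, assuming the V2_0 (access token customization) trigger version, that reads the ClientMetadata sent with a client_credentials request and shapes the issued token; the scope names and claim keys are hypothetical.

```python
# Pre-token generation handler: map request ClientMetadata to scopes and claims.
def handler(event, context):
    metadata = event["request"].get("clientMetadata") or {}
    environment = metadata.get("environment", "dev")

    scopes_to_add = ["api:read_all"]
    if environment == "prod":
        scopes_to_add.append("api:write_restricted")

    event["response"]["claimsAndScopeOverrideDetails"] = {
        "accessTokenGeneration": {
            "claimsToAddOrOverride": {
                "environment": environment,
                "rate_limit": "100" if environment == "prod" else "10",
            },
            "scopesToAdd": scopes_to_add,
        }
    }
    return event
```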
AWS Blogs
AWS Japan Blog (Japanese)
- Protect your data using Amazon FSx for NetApp ONTAP Autonomous Ransomware Protection
- Meta’s Llama 4 model can now be used with Amazon Bedrock Serverless
- Reduce operational overhead today with Amazon CloudFront SaaS Manager
- Writer’s Palmyra X5 and X4 base models are now available on Amazon Bedrock
- AWS Weekly Roundup: Amazon Q Developer, AWS Account Management Updates, and More (April 28, 2025)
- Extending Amazon Q Developer CLI with Model Context Protocol (MCP) for richer context
- Currently in progress — new availability zone in Maryland in the US East (N. Virginia) region
- Use AWS AppSync Events data source integration to enhance real-time applications
- New Amazon EC2 Graviton4-based instances with NVMe SSD storage
- Generative AI at the forefront! AWS Generative AI Event Guide for May
AWS News Blog
AWS Cloud Operations Blog
AWS Big Data Blog
AWS Compute Blog
AWS Database Blog
AWS DevOps & Developer Productivity Blog
- AWS’s Well-Architected Framework Transformed by Amazon Q Developer
- Migrating a CDK v1 Application to CDK v2 with Amazon Q Developer
AWS for Industries
AWS Machine Learning Blog
- Build public-facing generative AI applications using Amazon Q Business for anonymous users
- FloQast builds an AI-powered accounting transformation solution with Anthropic’s Claude 3 on Amazon Bedrock
- Insights in implementing production-ready solutions with generative AI