11/21/2024, 12:00:00 AM ~ 11/22/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon EC2 adds new CPU-performance attribute for instance type selection

Starting today, EC2 Auto Scaling and EC2 Fleet customers can express their EC2 instances’ CPU-performance requirements as part of the Attribute-Based Instance Type Selection (ABIS) configuration. With ABIS, customers can already choose a list of instance types by defining a set of desired resource requirements, such as the number of vCPUs and memory per instance. Now, in addition to the quantitative resource requirements, customers can also identify an instance family that ABIS will use as a baseline to automatically select instance types that offer similar or better CPU performance, enabling customers to further optimize their instance-type selection.

ABIS is a powerful tool for customers looking to leverage instance type diversification to meet their capacity requirements. For example, customers who use Spot Instances to launch into limited EC2 spare capacity at a discounted price can access multiple instance types to fulfill their larger capacity needs and experience fewer interruptions. With this release, customers can, for example, use ABIS in a launch request for instances in the C, M, and R instance classes, with a minimum of 4 vCPUs, that provide CPU performance in line with the C6i instance family, or better. The feature is available in all AWS commercial and AWS GovCloud (US) Regions. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your instance requirements. To get started, refer to the user guides for EC2 Auto Scaling and EC2 Fleet.
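
As a sketch of what this looks like in practice, the dict below mirrors the InstanceRequirements structure used by EC2 Fleet and Auto Scaling, with the new CPU-performance baseline naming C6i as the reference family. The field names follow the launch description; verify them against the current EC2 API reference before use.

```python
# Sketch of an ABIS InstanceRequirements block (field names per the launch
# description; verify against the current EC2 API reference). Used, e.g., as
# a launch template override in a CreateFleet or Auto Scaling request.
instance_requirements = {
    "VCpuCount": {"Min": 4},                     # at least 4 vCPUs
    "MemoryMiB": {"Min": 8192},                  # at least 8 GiB of memory
    "AllowedInstanceTypes": ["c*", "m*", "r*"],  # C, M, and R instance classes
    # New with this launch: only select instance types whose CPU performance
    # is similar to or better than the C6i instance family.
    "BaselinePerformanceFactors": {
        "Cpu": {"References": [{"InstanceFamily": "c6i"}]}
    },
}

# Usage with boto3 (not executed here):
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_fleet(..., LaunchTemplateConfigs=[{
#     "LaunchTemplateSpecification": {...},
#     "Overrides": [{"InstanceRequirements": instance_requirements}],
# }])
```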

Amazon S3 Express One Zone is now available in three additional AWS Regions

The Amazon S3 Express One Zone storage class is now available in three additional AWS Regions: Asia Pacific (Mumbai), Europe (Ireland), and US East (Ohio).

S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications. S3 Express One Zone delivers data access speed up to 10x faster and request costs up to 50% lower than S3 Standard. It enables workloads such as machine learning training, interactive analytics, and media content creation to achieve single-digit millisecond data access speed with high durability and availability. S3 Express One Zone is now generally available in seven AWS Regions. For information on AWS service and AWS Partner integrations with S3 Express One Zone, visit the S3 Express One Zone integrations page. To learn more about S3 Express One Zone, visit the S3 User Guide.

Amazon S3 Express One Zone now supports the ability to append data to an object

Amazon S3 Express One Zone now supports the ability to append data to an object. For the first time, applications can add data to an existing object in S3.

Applications that continuously receive data over a period of time need the ability to add data to existing objects. For example, log-processing applications continuously add new log entries to the end of existing log files. Similarly, media-broadcasting applications add new video segments to video files as they are transcoded and then immediately stream the video to viewers. Previously, these applications needed to combine data in local storage before copying the final object to S3. Now, applications can directly append new data to existing objects and then immediately read the object, all within S3 Express One Zone. You can append data to objects in S3 Express One Zone in all AWS Regions where the storage class is available. You can get started using the AWS SDK, the AWS CLI, or Mountpoint for Amazon S3 (version 1.12.0 or higher). To learn more, visit the S3 User Guide.
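
A minimal sketch of the append flow with the AWS SDK for Python: read the current object size, then write at that offset. The WriteOffsetBytes parameter is the append mechanism described in the launch; bucket and key names below are placeholders.

```python
def append_to_object(s3, bucket, key, data):
    """Append `data` (bytes) to an existing object in an S3 Express One Zone
    directory bucket by writing at the current end of the object."""
    # The current object size becomes the write offset for the append.
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    return s3.put_object(Bucket=bucket, Key=key, Body=data,
                         WriteOffsetBytes=size)

# Usage (not executed here; requires an S3 Express One Zone directory bucket):
# import boto3
# s3 = boto3.client("s3")
# append_to_object(s3, "my-bucket--usw2-az1--x-s3", "logs/app.log", b"entry\n")
```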

Amazon EC2 G6e instances now available in additional regions

Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in Asia Pacific (Tokyo) and Europe (Frankfurt, Spain). G6e instances can be used for a wide range of machine learning and spatial computing use cases. G6e instances deliver up to 2.5x better performance compared to G5 instances and up to 20% lower inference costs than P4d instances.

Customers can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Additionally, G6e instances will unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 384 GB of total GPU memory (48 GB of memory per GPU) and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage. Developers can run AI inference workloads on G6e instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Kubernetes Service (Amazon EKS) and AWS Batch, with Amazon SageMaker support coming soon. Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt, Spain) Regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6e instance page.

AWS Application Load Balancer introduces header modification for enhanced traffic control and security

Application Load Balancer (ALB) now supports HTTP request and response header modification, giving you greater control over your application’s traffic and security posture without having to alter your application code.

This feature introduces three key capabilities: renaming specific load balancer generated headers, inserting specific response headers, and disabling the server response header. With header rename, you can now rename all ALB-generated Transport Layer Security (TLS) headers that the load balancer adds to requests, which includes the six mTLS headers and two TLS headers (version and cipher). This capability enables seamless integration with existing applications that expect headers in a specific format, minimizing changes to your backends while using TLS features on the ALB. With header insertion, you can insert custom headers related to Cross-Origin Resource Sharing (CORS) and critical security headers like HTTP Strict-Transport-Security (HSTS). Finally, the capability to disable the ALB-generated “Server” header in responses reduces exposure of server-specific information, adding an extra layer of protection to your application. These response header modification features let you centrally enforce your organization’s security posture at the load balancer instead of at individual applications, which can be prone to errors. You can configure the header modification feature using AWS APIs, the AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, AWS GovCloud (US) Regions, and China Regions. To learn more, refer to the ALB documentation.
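
As an illustration, header modification is configured through listener attributes. The sketch below shows the general shape with boto3; the specific attribute keys are assumptions modeled on ALB’s listener-attribute naming and should be checked against the ALB documentation.

```python
# Hypothetical listener attributes covering the three capabilities
# (attribute keys are assumptions; confirm exact names in the ALB docs).
header_mod_attributes = [
    # Rename an ALB-generated TLS header to the name a backend expects.
    {"Key": "routing.http.request.x_amzn_tls_version.header_name",
     "Value": "X-Backend-TLS-Version"},
    # Insert an HSTS response header.
    {"Key": "routing.http.response.strict_transport_security.header_value",
     "Value": "max-age=31536000; includeSubDomains"},
    # Disable the ALB-generated "Server" response header.
    {"Key": "routing.http.response.server.enabled", "Value": "false"},
]

# Usage (not executed here):
# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.modify_listener_attributes(ListenerArn=listener_arn,
#                                  Attributes=header_mod_attributes)
```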

Amazon CloudWatch Synthetics now supports Playwright runtime to create canaries with NodeJS

CloudWatch Synthetics, which continuously monitors web applications and APIs by running scripted canaries to help you detect issues before they impact end users, now supports the Playwright framework for creating NodeJS canaries. This enables comprehensive monitoring and diagnosis of complex user journeys and issues that are challenging to automate with other frameworks.

Playwright is an open-source automation library for testing web applications. You can now create multi-tab workflows in a canary using the Playwright runtime, which comes with the advantage of troubleshooting failed runs with logs stored directly in CloudWatch Logs in your AWS account. This replaces the previous method of storing logs as text files and enables you to leverage CloudWatch Logs Insights for query-based filtering, aggregation, and pattern analysis. You can now query CloudWatch Logs for your canaries using the canary run ID or step name, making troubleshooting faster and more precise than relying on timestamp correlation to search logs. Playwright-based canaries also generate artifacts like reports, metrics, and HAR files even when a canary times out, ensuring you have the data needed for root cause analysis in those scenarios. Additionally, the new runtime simplifies canary configuration by allowing customization through a JSON file, removing the need to call a library function in the canary code. The Playwright runtime is available for creating canaries in NodeJS in all commercial Regions at no additional cost to users. To learn more about the runtime, see the documentation, or refer to the user guide to get started with CloudWatch Synthetics.

Amazon S3 Express One Zone now supports S3 Lifecycle expirations

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports object expiration using S3 Lifecycle. S3 Lifecycle can expire objects based on age to help you automatically optimize storage costs.

Now, you can configure S3 Lifecycle rules for S3 Express One Zone to expire objects on your behalf. You can configure an S3 Lifecycle expiration rule either for your entire bucket or for a subset of objects by filtering by prefix or object size. For example, you can create an S3 Lifecycle rule that expires all objects smaller than 512 KB after 3 days and another rule that expires all objects in a prefix after 10 days. Additionally, S3 Lifecycle logs S3 Express One Zone object expirations in AWS CloudTrail, giving you the ability to monitor, set alerts for, and audit them. Amazon S3 Express One Zone support for S3 Lifecycle expiration is generally available in all AWS Regions where the storage class is available. You can get started with S3 Lifecycle using the Amazon S3 REST API, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK) client. To learn more about S3 Lifecycle, visit the S3 User Guide.
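
The two example rules above can be expressed as a lifecycle configuration, sketched here in boto3 dict form (bucket and prefix names are placeholders):

```python
# Expire objects smaller than 512 KB after 3 days, and everything under a
# prefix after 10 days (mirrors the examples in the announcement).
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-small-objects",
            "Status": "Enabled",
            "Filter": {"ObjectSizeLessThan": 512 * 1024},
            "Expiration": {"Days": 3},
        },
        {
            "ID": "expire-tmp-prefix",
            "Status": "Enabled",
            "Filter": {"Prefix": "tmp/"},  # placeholder prefix
            "Expiration": {"Days": 10},
        },
    ]
}

# Usage (not executed here):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket--usw2-az1--x-s3",
#     LifecycleConfiguration=lifecycle_config)
```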

Announcing new Amazon CloudWatch Metrics for AWS Lambda Event Source Mappings (ESMs)

AWS Lambda announces new Amazon CloudWatch metrics for Lambda Event Source Mappings (ESMs), which provide customers visibility into the processing state of events read by ESMs that subscribe to Amazon SQS, Amazon Kinesis, and Amazon DynamoDB event sources. This enables customers to easily monitor issues or delays in event processing and take corrective action.

Customers use ESMs to read events from event sources and invoke Lambda functions. Lack of visibility into the processing state of events ingested by ESMs delays diagnosis of event processing issues. Customers can now use the following CloudWatch metrics to monitor the processing state of events ingested by ESMs: PolledEventCount, InvokedEventCount, FilteredOutEventCount, FailedInvokeEventCount, DeletedEventCount, DroppedEventCount, and OnFailureDestinationDeliveredEventCount. PolledEventCount counts the events read by an ESM, and InvokedEventCount counts the events that invoked a Lambda function. FilteredOutEventCount counts the events filtered out by an ESM. FailedInvokeEventCount counts the events that attempted to invoke a Lambda function but encountered a failure. DeletedEventCount counts the events deleted from the SQS queue by Lambda upon successful processing. DroppedEventCount counts the events dropped due to event expiry or exhaustion of retry attempts. OnFailureDestinationDeliveredEventCount counts the events successfully sent to an on-failure destination. This feature is generally available in all AWS Commercial Regions where AWS Lambda is available. You can enable ESM metrics using the Lambda event source mapping API, AWS Console, AWS CLI, AWS SDKs, AWS CloudFormation, and AWS SAM. To learn more about these metrics, visit the Lambda developer guide. These new metrics are charged at standard CloudWatch metrics pricing.
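
Enabling the metrics is an opt-in on the event source mapping. A boto3 sketch follows; the MetricsConfig shape and the "EventCount" group name are taken from the launch documentation and should be verified against the current Lambda API reference.

```python
# Opt an event source mapping into the new per-event CloudWatch metrics.
# The MetricsConfig shape and "EventCount" group name follow the launch
# docs; verify against the current Lambda API reference.
metrics_config = {"Metrics": ["EventCount"]}

# The per-event metrics the mapping then emits, per the announcement:
ESM_METRICS = [
    "PolledEventCount", "InvokedEventCount", "FilteredOutEventCount",
    "FailedInvokeEventCount", "DeletedEventCount", "DroppedEventCount",
    "OnFailureDestinationDeliveredEventCount",
]

# Usage (not executed here):
# import boto3
# lam = boto3.client("lambda")
# lam.update_event_source_mapping(UUID=esm_uuid, MetricsConfig=metrics_config)
```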

Amazon EC2 C7i-flex and M7i-flex instances are now available in AWS Asia Pacific (Malaysia) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) Flex (C7i-flex, M7i-flex) instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Malaysia) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

Flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose and compute-intensive workloads. C7i-flex and M7i-flex instances deliver up to 19% better price-performance compared to C6i and M6i instances respectively. These instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don’t fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, microservices, databases, caches, and more. For workloads that need larger instance sizes (up to 192 vCPUs and 768 GiB memory) or continuous high CPU usage, you can leverage C7i and M7i instances. C7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), and South America (São Paulo). M7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), South America (São Paulo), and the AWS GovCloud (US-East, US-West).

Announcing enhanced purchase order support for AWS Marketplace

Today, AWS Marketplace is extending transaction purchase order number support to products with pay-as-you-go pricing, including Amazon Bedrock subscriptions, software as a service (SaaS) contracts with consumption pricing, and AMI annuals. Additionally, you can update purchase order numbers post-subscription prior to invoice creation to ensure your invoices reflect the proper purchase order. This launch helps you allocate costs and makes it easier to process and pay invoices.

The purchase order feature in AWS Marketplace allows the purchase order number that you provide at the time of the transaction in AWS Marketplace to appear on all invoices related to that purchase. Now, you can provide a purchase order at the time of purchase for most products available in AWS Marketplace, including products with pay-as-you-go pricing. You can add or update purchase orders post-subscription, prior to invoice generation, within the AWS Marketplace console. You can also provide more than one PO for products appearing on your monthly AWS Marketplace invoice and receive a unique invoice for each purchase order. Additionally, you can add a unique PO for each fixed charge and associated AWS Marketplace monthly usage charges at the time of purchase, or post-subscription in the AWS Marketplace console. You can update purchase orders for existing subscriptions under manage subscriptions in the AWS Marketplace console. To enable transaction purchase orders for AWS Marketplace, sign in to the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

Amazon EC2 R8g instances now available in AWS Europe (Stockholm)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Europe (Stockholm) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Mountpoint for Amazon S3 now supports a high performance shared cache

You can now use Amazon S3 Express One Zone as a high performance read cache with Mountpoint for Amazon S3. The cache can be shared by multiple compute instances and can elastically scale to any dataset size. Mountpoint for S3 is a file client that translates local file system API calls to REST API calls on S3 objects. With this launch, Mountpoint for S3 can cache data in S3 Express One Zone after it’s read, making subsequent read requests up to 7x faster compared to reading data from S3 Standard.

Previously, Mountpoint for S3 could cache recently accessed data in Amazon EC2 instance storage, EC2 instance memory, or an Amazon EBS volume. This improved performance for repeated read access from the same compute instance for dataset sizes up to the size of the available local storage. Starting today, you can also opt in to caching data in S3 Express One Zone, benefiting applications that repeatedly read a shared dataset across multiple compute instances, without any limits on the total dataset size. Once you opt in, Mountpoint for S3 retains objects with sizes up to one megabyte in S3 Express One Zone. This is ideal for compute-intensive use cases such as machine learning training for computer vision models, where applications repeatedly read millions of small images from multiple instances. Mountpoint for Amazon S3 is an open source project backed by AWS support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To get started, visit the GitHub page and product page.

Amazon VPC IPAM now supports enabling IPAM for organizational units within AWS Organizations

Today, AWS announced the ability for Amazon VPC IP Address Manager (IPAM) to be enabled and used for specific organizational units (OUs) within AWS Organizations. This allows you to enable IPAM for specific types of workloads, such as production workloads, or for specific business subsidiaries, that are grouped as OUs in your organization.

VPC IPAM makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. Typically, you would enable IPAM for the entire organization, giving you a unified view of all the IP addresses. In some cases, you may want to enable IPAM only for parts of your organization. For example, you may want to enable IPAM for all workloads except a sandbox OU that is isolated from your core network and contains only experimental workloads. Or, you may want to onboard selected business subsidiaries that need IPAM ahead of others in the organization. In such cases, you can use this new feature to enable IPAM for specific parts of your organization that are grouped as OUs. Amazon VPC IPAM is available in all AWS Regions, including China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions. To learn more about this feature, view the service documentation. For details on IPAM pricing, refer to the IPAM tab on the Amazon VPC Pricing page.

AWS announces Media Quality-Aware Resiliency for live streaming

Starting today, you can enable Media Quality-Aware Resiliency (MQAR), an integrated capability between Amazon CloudFront and AWS Media Services that provides dynamic, cross-region origin selection and failover based on a dynamically generated video quality score. Built for customers that need always-on ‘eyes-on-glass’ to deliver live events and 24/7 programming channels, MQAR automatically switches between regions in seconds to recover from video quality degradation in one of the regions. This is designed to help deliver a high quality of experience to viewers.

Previously, you could use a CloudFront origin group to fail over between two AWS Elemental MediaPackage origins in different AWS Regions based on HTTP error codes. Now with MQAR, your live event streaming workflow has the resiliency to withstand video quality issues including black frames, frozen or dropped frames, or repeated frames. AWS Elemental MediaLive analyzes the video input delivered from the source and dynamically generates a quality score reflecting perceived changes in video quality. Subsequently, your CloudFront distribution continuously selects the MediaPackage origin that reports the highest quality score. You can create CloudWatch alerts to be notified of quality issues using the provided metrics for quality indicators. To get started with MQAR, deploy a cross-region channel delivery using AWS Media Services and configure CloudFront to use MQAR in the origin group. CloudFormation support will be coming soon. There is no additional cost for enabling MQAR; standard pricing applies for CloudFront and AWS Media Services. To learn more about MQAR, refer to the launch blog and the CloudFront Developer Guide.

Amazon EC2 now provides lineage information for your AMIs

Amazon EC2 now provides source details for your Amazon Machine Images (AMIs). With this lineage information, you can easily trace any copied or derived AMI back to its original AMI source.

Prior to today, you had to maintain a list of AMIs, use tags, and create custom scripts to track the origins of an AMI. This approach was time-consuming, hard to scale, and resulted in operational overhead. Now with this capability, you can easily view details of the source AMI, making it easier for you to understand where a particular AMI originated. When copying AMIs across AWS Regions, the lineage information clearly links the copied AMIs to their original AMIs. This new capability provides a more streamlined and efficient way to manage and understand the lineage of AMIs within your AWS environment. You can view these details by using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including AWS GovCloud (US) and AWS China Regions. To learn more, please visit our documentation.
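
A small sketch of reading the lineage from a DescribeImages response; the SourceImageId and SourceImageRegion field names follow the launch description and should be verified against the EC2 API reference.

```python
def ami_lineage(image):
    """Return (source_image_id, source_region) for an image dict from a
    DescribeImages response, or (None, None) if the AMI has no recorded
    source. Field names are assumptions from the launch description."""
    return image.get("SourceImageId"), image.get("SourceImageRegion")

# Usage (not executed here):
# import boto3
# ec2 = boto3.client("ec2")
# image = ec2.describe_images(Owners=["self"])["Images"][0]
# print(ami_lineage(image))
```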

AWS DMS now delivers improved performance for data validation

AWS Database Migration Service (AWS DMS) has enhanced data validation performance for database migrations, enabling customers to validate large datasets with significantly faster processing times.

This enhanced data validation is now available in version 3.5.4 of the replication engine for both full load and full load with CDC migration tasks. Currently, this enhancement supports migration paths from Oracle to PostgreSQL, SQL Server to PostgreSQL, Oracle to Oracle, and SQL Server to SQL Server, with additional migration paths planned for future releases. To learn more about data validation performance improvements with AWS DMS, please refer to the AWS DMS Technical Documentation.

AWS Marketplace announces improved offer and agreement management capabilities for sellers

AWS Marketplace now offers improved capabilities to help sellers manage agreements and create new offers more efficiently. Sellers can access an improved agreements navigation experience, export details to PDF, and clone past private offers in the AWS Marketplace Management Portal.

The new agreements experience makes it easier to find agreements for a specific offer or by customer and take action based on the agreement’s status—active, expiring, expired, replaced, or cancelled. This holistic view enables you to retrieve agreements faster to help you prepare for customer engagements and identify renewal or expansion opportunities. To simplify sharing and offline collaboration, you can now export details into PDF format. Additionally, the new offer cloning capability enables you to replicate common offer configurations from past direct private offers. This gives you the ability to quickly make adjustments for renewals and revisions to ongoing offers. These features are available for all AWS Partners selling SaaS, Amazon Machine Images (AMI), containers, and professional services products in AWS Marketplace. To learn more, visit the AWS Marketplace Seller Guide, or access the AWS Marketplace Management Portal to try the new capabilities.

Amazon CloudWatch Logs launches the ability to transform and enrich logs

Amazon CloudWatch Logs announces log transformation and enrichment to improve log analytics at scale with a consistent, context-rich format. Customers can add structure to their logs using pre-configured templates for common AWS services such as AWS Web Application Firewall (WAF) and Route 53, or build custom transformers with native parsers such as Grok. Customers can also rename existing attributes and add additional metadata to their logs, such as accountId and region.

Logs emitted from various sources vary widely in format and attribute names, which makes analysis across sources cumbersome. With today’s launch, customers can simplify their log analytics experience by transforming all their logs into a standardized JSON structure. Transformed logs can accelerate analytics using field indexes and discovered fields in CloudWatch Logs Insights, and provide flexibility in alarming using metric filters and forwarding via subscription filters. Customers can manage log transformations natively within CloudWatch without needing to set up complex pipelines. The log transformation and enrichment capability is available in all AWS Commercial Regions and is included with the existing Standard log class ingestion price. Logs Store (Archival) costs will be based on log size after transformation, which may exceed the original log volume. With a few clicks in the Amazon CloudWatch Console, customers can configure transformers at the log group level. Alternatively, customers can set up transformers at the account or log group level using the AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), and AWS SDKs. Read the documentation to learn more about this capability.
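
As an illustration of a custom transformer, the sketch below combines a Grok parser with added metadata keys. The processor names and shapes are assumptions modeled on the CloudWatch Logs transformer documentation; check them before use.

```python
# Hypothetical transformer: parse Apache-style access logs with Grok, then
# enrich each event with accountId and region (processor shapes are
# assumptions; verify in the CloudWatch Logs documentation).
transformer_config = [
    {"grok": {"match": "%{COMMONAPACHELOG}"}},
    {"addKeys": {"entries": [
        {"key": "accountId", "value": "123456789012"},  # placeholder account
        {"key": "region", "value": "us-east-1"},
    ]}},
]

# Usage (not executed here):
# import boto3
# logs = boto3.client("logs")
# logs.put_transformer(logGroupIdentifier="my-log-group",
#                      transformerConfig=transformer_config)
```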

Amazon RDS for PostgreSQL supports pgvector 0.8.0

Amazon Relational Database Service (RDS) for PostgreSQL now supports pgvector 0.8.0, an open-source extension for PostgreSQL for storing and efficiently querying vector embeddings in your database, letting you use retrieval-augmented generation (RAG) when building your generative AI applications. The pgvector 0.8.0 release includes improvements to the PostgreSQL query planner’s selection of indexes when filters are present, which can deliver better query performance and improve search result quality.

The pgvector 0.8.0 release includes a variety of improvements to how pgvector filters data using conditions in WHERE clauses and joins, which can improve query performance and usability. Additionally, iterative index scans help prevent ‘overfiltering’, ensuring generation of sufficient results to satisfy the conditions of a query. If an initial index scan doesn’t satisfy the query conditions, pgvector will continue to search the index until it hits a configurable threshold. This release also has performance improvements for searching and building HNSW indexes. pgvector 0.8.0 is available on database instances in Amazon RDS running PostgreSQL 17.1 and higher, 16.5 and higher, 15.9 and higher, 14.14 and higher, and 13.17 and higher in all applicable AWS Regions.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
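
The iterative scan behavior described above is controlled with session-level settings. A sketch under assumed table and column names, using the hnsw.iterative_scan and hnsw.max_scan_tuples settings from the pgvector 0.8.0 release notes:

```python
# SQL for a filtered nearest-neighbour query with pgvector 0.8.0 iterative
# index scans. Table/column names (items, embedding, category) are
# placeholders; the hnsw.* settings are the 0.8.0 GUCs (verify in the
# pgvector release notes).
statements = [
    # Keep scanning the HNSW index until enough rows pass the WHERE filter,
    # preventing "overfiltering".
    "SET hnsw.iterative_scan = relaxed_order;",
    # Optional upper bound on how many tuples an iterative scan may visit.
    "SET hnsw.max_scan_tuples = 20000;",
    "SELECT id FROM items WHERE category = 'docs' "
    "ORDER BY embedding <-> '[1,2,3]' LIMIT 10;",
]

# Usage with any PostgreSQL driver (not executed here):
# import psycopg
# with psycopg.connect(dsn) as conn, conn.cursor() as cur:
#     for sql in statements:
#         cur.execute(sql)
```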

Amazon RDS Blue/Green Deployments Green storage fully performant prior to switchover

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now support managed initialization of Green storage volumes that accelerates the loading of storage blocks from Amazon S3. This ensures that the volumes are fully performant prior to switchover of the Green databases. Blue/Green Deployments create a fully managed staging environment, or Green database, by restoring the Blue database snapshot. The Green database allows you to deploy and test production changes, keeping your current production database, or Blue database, safer.

Previously, you had to manually initialize the storage volumes of the Green databases. With this launch, RDS Blue/Green Deployments will proactively manage and accelerate the storage initialization for your Green database instances. You will be able to view the progress of storage initialization using the RDS Console and command line interface (CLI). Managed storage initialization of the Green databases is supported for Blue/Green Deployments created for RDS for PostgreSQL, RDS for MySQL, and RDS for MariaDB engines. Amazon RDS Blue/Green Deployments are available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher. In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

Amazon OpenSearch Service now supports Custom Plugins

Amazon OpenSearch Service introduces Custom Plugins, a new plugin management option that allows you to extend OpenSearch functionality and deliver personalized experiences for applications such as website search, log analytics, application monitoring, and observability. OpenSearch provides a rich set of search and analysis capabilities, and with custom plugins, you can extend these further to meet your business needs.

Until now, you had to build and operate your own search infrastructure to support applications that required customization in areas like language analysis, custom filtering, ranking, and more. With this launch, you can run custom plugins on Amazon OpenSearch Service that allow you to extend the search and analysis functions of OpenSearch. You can use the OpenSearch Service console or APIs to upload and associate search and analysis plugins with your domains. OpenSearch Service validates the plugin package for version compatibility, security, and permitted plugin operations. Custom plugins are now supported on all OpenSearch Service domains running OpenSearch version 2.15 or later, and are available in 14 Regions globally: US West (Oregon), US East (Ohio), US East (N. Virginia), South America (Sao Paulo), Europe (Paris), Europe (London), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Seoul), and Asia Pacific (Mumbai). To get started with custom plugins, visit our documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
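
The upload-and-associate flow can be sketched with boto3. The ZIP-PLUGIN package type and call shapes follow the OpenSearch Service package APIs, but treat the specifics (and all names below) as assumptions to verify in the documentation.

```python
# Hypothetical flow: register a custom plugin ZIP stored in S3 as a package,
# then associate it with a domain (bucket, key, and domain names are
# placeholders).
create_package_request = {
    "PackageName": "my-custom-analyzer",
    "PackageType": "ZIP-PLUGIN",  # package type for custom plugins
    "PackageSource": {
        "S3BucketName": "my-plugin-bucket",
        "S3Key": "plugins/my-custom-analyzer.zip",
    },
}

# Usage (not executed here):
# import boto3
# aos = boto3.client("opensearch")
# pkg = aos.create_package(**create_package_request)["PackageDetails"]
# aos.associate_package(PackageID=pkg["PackageID"], DomainName="my-domain")
```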

Amazon RDS for MySQL now supports MySQL 8.4 LTS release

Amazon RDS for MySQL now supports MySQL major version 8.4, the latest long-term support (LTS) release from the MySQL community. RDS for MySQL 8.4 is integrated with the AWS Libcrypto (AWS-LC) FIPS module (Certificate #4816), and includes support for the multi-source replication plugin for analytics and the Group Replication plugin for continuous availability, as well as several performance and feature improvements added by the MySQL community. Learn more about the community enhancements in the MySQL 8.4 release notes.

You can leverage Amazon RDS Managed Blue/Green Deployments to upgrade your databases from MySQL 8.0 to MySQL 8.4. Learn more about RDS for MySQL 8.4 features and upgrade options, including Managed Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MySQL 8.4 is now available in all AWS Commercial and the AWS GovCloud (US) Regions. Amazon RDS for MySQL makes it straightforward to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL 8.4 database in the Amazon RDS Management Console.

Amazon CloudFront announces origin modifications using CloudFront Functions

Amazon CloudFront now supports origin modification within CloudFront Functions, enabling you to conditionally change or update origin servers on each request. You can now write custom logic in CloudFront Functions to overwrite origin properties, use another origin in your CloudFront distribution, or forward requests to any public HTTP endpoint.

Origin modification allows you to create custom routing policies for how traffic should be forwarded to your application servers on cache misses. For example, you can use origin modification to determine the geographic location of a viewer and then forward the request, on cache misses, to the closest AWS Region running your application. This ensures the lowest possible latency for your application. Previously, you had to use AWS Lambda@Edge to modify origins, but now this same capability is available in CloudFront Functions with better performance and lower costs. Origin modification supports updating all existing origin capabilities such as setting custom headers, adjusting timeouts, setting Origin Shield, or changing the primary origin in origin groups. Origin modification is now available within CloudFront Functions at no additional charge. For more information, see the CloudFront Developer Guide. For examples of how to use origin modification, see our GitHub examples repository.
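CloudFront Functions themselves are written in JavaScript, where this capability is exposed through an origin-update helper; the Python sketch below only mirrors the routing decision such a function would make on a cache miss. The country-to-origin table and domain names are hypothetical:

```python
# Hypothetical mapping from the viewer's country (CloudFront-Viewer-Country
# header) to the closest regional origin. In a real CloudFront Function
# (JavaScript), the chosen domain would be applied to the request's origin.
REGION_ORIGINS = {
    "DE": "app.eu-central-1.example.com",
    "JP": "app.ap-northeast-1.example.com",
}
DEFAULT_ORIGIN = "app.us-east-1.example.com"

def pick_origin(viewer_country):
    """Return the origin domain to route a cache miss to."""
    return REGION_ORIGINS.get(viewer_country, DEFAULT_ORIGIN)

print(pick_origin("DE"))  # app.eu-central-1.example.com
```

The same pattern extends to the other origin properties the announcement mentions, such as timeouts or Origin Shield settings.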

Amazon RDS for PostgreSQL supports minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during a scheduled maintenance window. Learn more about upgrading your database instances in the Amazon RDS User Guide. Additionally, starting with PostgreSQL major version 18, Amazon RDS for PostgreSQL will deprecate the plcoffee and plls PostgreSQL extensions. We recommend that you stop using CoffeeScript and LiveScript in your applications to ensure you have an upgrade path for the future. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
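Opting a database into automatic minor version upgrades is a single attribute on the instance; as a sketch, the boto3 `modify_db_instance` parameters would look like this (the instance identifier is a placeholder):

```python
# Sketch only: parameter shape for rds.modify_db_instance (boto3).
params = {
    "DBInstanceIdentifier": "my-postgres-db",  # placeholder
    "AutoMinorVersionUpgrade": True,  # pick up new minors automatically
    "ApplyImmediately": False,        # wait for the next maintenance window
}
# In a real account: boto3.client("rds").modify_db_instance(**params)
```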

AWS Backup for Amazon S3 adds new restore parameter

AWS Backup introduces a new restore parameter for Amazon S3 backups, offering you the ability to choose how many versions of an object to restore.

By default, AWS Backup restores only the latest version of objects from the version stack at any point in time. The new parameter now allows you to recover all versions of your data by restoring the entire version stack. You can also recover just the latest version(s) of an object without the overhead of restoring all older versions. With this feature, you have more flexibility to control the data recovery process of Amazon S3 buckets/prefixes from your Amazon S3 backups, tailoring restore jobs to your requirements. This feature is available in all Regions where AWS Backup for Amazon S3 is available. For more information on Regional availability and pricing, see the AWS Backup pricing page. To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.

Introducing an AWS Management Console Visual Update (Preview)

Now available in Preview, the visual update in the AWS Management Console helps customers scan content, focus on the key information, and find what they are looking for more effectively, while preserving the familiar and consistent experience. The new, modern layout also provides easy access to contextual tools.

Customers now benefit from optimized information density that maximizes available content on screen, allowing them to see more content at a glance. Thanks to a reduced visual complexity, crisper styles and improved use of color, the experience is more intuitive, readable, and efficient. We modernized the interface, with rounder shapes and a new family of illustrations, complemented by added motion to bring moments of delight. While introducing these visual enhancements, we continue to offer a predictable experience that adheres to the highest accessibility standards. The visual update is available in selected consoles across all AWS Regions, with the latest version of Cloudscape Design System. We will be extending the update across all services. Visit the AWS Management Console to experience the visual update.

AWS Elastic Beanstalk adds support for Ruby 3.3

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Ruby 3.3 on AL2023 adds support for a new parser, a new pure-Ruby just-in-time compiler and several performance improvements. You can create Elastic Beanstalk environment(s) running Ruby 3.3 on AL2023 using any of the Elastic Beanstalk interfaces such as Elastic Beanstalk Console, Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions. For more information about Ruby and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Amazon SQS increases in-flight limit for FIFO queues from 20K to 120K

Amazon SQS increases the in-flight limit for FIFO queues from 20K to 120K messages. When a message is sent to an SQS FIFO queue, it is added to the queue backlog. Once you invoke a receive request on the FIFO queue, the message is marked as in-flight and remains in-flight until a delete message request is invoked.

With this change to the in-flight limit, your receivers can now process a maximum of 120K messages concurrently via SQS FIFO queues, increased from 20K previously. If you have sufficient publish throughput and were constrained by the 20K in-flight limit, you can now process up to 120K messages at a time by scaling your receivers. The increased in-flight limit is available in all commercial and the AWS GovCloud (US) Regions where SQS FIFO queues are available. To get started, see the following resources:

High-Throughput SQS FIFO queues, in the Amazon SQS Developer Guide

SQS FIFO queue quotas, in the Amazon SQS Developer Guide
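To illustrate the in-flight semantics described above (this is a toy model, not the SQS API): a message leaves the backlog and counts against the in-flight limit on receive, and stops counting only when deleted.

```python
class FifoQueueModel:
    """Toy model of SQS FIFO in-flight accounting (illustrative only)."""
    IN_FLIGHT_LIMIT = 120_000  # raised from 20,000

    def __init__(self):
        self.backlog = []       # messages waiting to be received
        self.in_flight = set()  # received but not yet deleted

    def send(self, msg_id):
        self.backlog.append(msg_id)

    def receive(self):
        # A message becomes in-flight on receive and stays in-flight
        # until it is deleted; receives fail once the limit is reached.
        if self.backlog and len(self.in_flight) < self.IN_FLIGHT_LIMIT:
            msg_id = self.backlog.pop(0)
            self.in_flight.add(msg_id)
            return msg_id
        return None

    def delete(self, msg_id):
        self.in_flight.discard(msg_id)

q = FifoQueueModel()
q.send("m1"); q.send("m2")
first = q.receive()
print(first, len(q.in_flight))  # m1 1
```

Scaling receivers raises concurrency only up to this limit, which is why lifting it from 20K to 120K matters for high-throughput consumers.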

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region. With this launch, Amazon MQ is now available in 34 regions.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite or modify your applications. For more information, please visit the Amazon MQ product page, and see the AWS Region Table for complete regional availability.

AWS Partner Network automates Foundational Technical Reviews using Amazon Bedrock

Today, AWS is announcing automation for the Foundational Technical Review (FTR) process using Amazon Bedrock. The new generative AI-driven automation process for the FTR optimizes the review timeline for AWS Partners, offering review decisions in minutes, accelerating a process that previously could take weeks. Gaining FTR approval allows Partners to fast-track their AWS Partner journey, unlocking access to AWS Partner Network (APN) programs and co-sell opportunities with AWS.

Partners seeking access to AWS funding programs, the AWS Competency Program to validate expertise, and the AWS ISV Accelerate Program for co-sell support must qualify their solutions by completing the FTR. With this launch, AWS has automated the FTR and enhanced the experience for Partners, with successful reviews being approved in minutes. Unsuccessful reviews will be forwarded for manual review, and an AWS expert will make contact within two weeks to remediate potential gaps. Partners will receive an email notification informing them of the review result, reducing wait time from weeks to minutes. Additionally, partners will be able to submit responses in several non-English languages, saving time for translation and improving the accuracy of their submissions. This generative AI-based automation accelerates the technical validation step, allowing Partners to spend more time on business initiatives. AWS Partners can request the FTR for their solution in AWS Partner Central. To learn more about the FTR, sign in to AWS Partner Central and download the FTR Guide (software or service solution).

Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency

Today, Amazon ElastiCache introduces support for Valkey 8.0, the latest Valkey major version. This release brings faster scaling for ElastiCache Serverless for Valkey and improved memory efficiency on node-based ElastiCache, compared to previous versions of ElastiCache for Valkey and Redis OSS. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation and is a drop-in replacement for Redis OSS. Backed by over 40 companies, Valkey has seen rapid adoption since its inception in March 2024.

Hundreds of thousands of customers use ElastiCache to scale their applications, improve performance, and optimize costs. ElastiCache Serverless version 8.0 for Valkey scales to 5 million requests per second (RPS) per cache in minutes, up to 5x faster than Valkey 7.2, with microsecond read latency. With node-based ElastiCache, you can benefit from improved memory efficiency, with 32 bytes less memory per key compared to ElastiCache version 7.2 for Valkey and ElastiCache for Redis OSS. AWS has made significant contributions to open source Valkey in the areas of performance, scalability, and memory optimizations, and we are bringing these benefits into ElastiCache version 8.0 for Valkey. ElastiCache version 8.0 for Valkey is now available in all AWS regions. You can upgrade from ElastiCache version 7.2 for Valkey or any ElastiCache for Redis OSS version to ElastiCache version 8.0 for Valkey in a few clicks without downtime. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page, blog and documentation.
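As a hedged sketch, the in-place upgrade maps to a `modify_replication_group` call for node-based clusters; the `Engine` parameter shown is used when also moving from Redis OSS to Valkey, and the parameter names should be checked against the current SDK:

```python
# Sketch only: parameter shape for elasticache.modify_replication_group
# (boto3). The replication group ID is a placeholder.
params = {
    "ReplicationGroupId": "my-cache",
    "Engine": "valkey",        # used when migrating from Redis OSS
    "EngineVersion": "8.0",
    "ApplyImmediately": True,  # or defer to the maintenance window
}
# In a real account: boto3.client("elasticache").modify_replication_group(**params)
```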

Announcing Commands feature for AWS IoT Device Management

Today, AWS IoT Device Management announced the general availability of the Commands feature, a managed capability that allows developers to build innovative applications where users can perform remote command and control actions on targeted devices and track the status of those executions. With this feature, you can send instructions, trigger device actions, or modify device configuration settings on-demand, simplifying the development of consumer-facing applications.

Using the Commands feature, you can set fine-grained access controls, timeout settings, and receive real-time updates and notifications for each command execution, without having to manually create and manage MQTT topics, payload formats, Rules, Lambda functions, and status tracking. In addition, the feature supports custom payload formats, allowing you to define and store command entities as AWS resources for recurring use. The AWS IoT Device Management Commands feature is available in all AWS Regions where AWS IoT Device Management is offered. To learn more, see the technical documentation. To get started, log in to the AWS IoT Management Console or use the CLI.

AWS Lambda supports application performance monitoring (APM) via CloudWatch Application Signals

AWS Lambda now supports Amazon CloudWatch Application Signals, an application performance monitoring (APM) solution, enabling developers and operators to easily monitor the health and performance of their serverless applications built using Lambda.

Customers want an easy way to quickly identify and troubleshoot performance issues to minimize the mean time to recovery (MTTR) and operational costs of running serverless applications. Now, Application Signals provides pre-built, standardized dashboards for critical application metrics (such as throughput, availability, latency, faults, and errors), correlated traces, and interactions between the Lambda function and its dependencies (such as other AWS services), without requiring any manual instrumentation or code changes from developers. This gives operators a single-pane-of-glass view of the health of the application and enables them to drill down to establish the root cause of performance anomalies. You can also create Service Level Objectives (SLOs) in Application Signals to closely track the performance KPIs of critical operations in your application, enabling you to easily identify and triage operations that do not meet your business KPIs. Application Signals auto-instruments your Lambda function using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivering better performance (cold start latency and memory consumption) than before. To get started, visit the Configuration tab in Lambda console and enable Application Signals for your function with just one click in the “Monitoring and operational tools” section. To learn more, visit the launch blog post, Lambda developer guide, and Application Signals developer guide. Application Signals for Lambda is available in all commercial AWS Regions where Lambda and CloudWatch Application Signals are available.
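Under the hood, the console toggle wires your function to the enhanced ADOT tooling. A rough sketch of the equivalent `update_function_configuration` parameters is below; the layer ARN is a placeholder, and `AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-instrument` is the standard ADOT wrapper hook (verify both against the Application Signals documentation):

```python
# Sketch only: the kind of configuration the console's one-click enablement
# applies (boto3 lambda.update_function_configuration shape). The layer ARN
# below is a placeholder, not a real ARN.
params = {
    "FunctionName": "my-function",  # placeholder
    "Environment": {
        "Variables": {"AWS_LAMBDA_EXEC_WRAPPER": "/opt/otel-instrument"}
    },
    # Placeholder for the AWS-managed ADOT instrumentation layer:
    "Layers": ["arn:aws:lambda:<region>:<account>:layer:<adot-layer>:<version>"],
}
```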

AWS Glue Data Catalog now supports Apache Iceberg automatic table optimization through Amazon VPC

AWS Glue Data Catalog now supports automatic optimization of Apache Iceberg tables that can only be accessed from a specific Amazon Virtual Private Cloud (VPC) environment. You can enable automatic optimization by providing a VPC configuration to optimize storage and improve query performance while keeping your tables secure.

AWS Glue Data Catalog supports compaction, snapshot retention, and unreferenced file management, which help you reduce metadata overhead, control storage costs, and improve query performance. Customers who have governance and security configurations that require an Amazon S3 bucket to reside within a specific VPC can now use it with the Glue Data Catalog. This gives you broader capabilities for automatic management of your Apache Iceberg data, regardless of where it’s stored on Amazon S3. Automatic optimization for Iceberg tables through Amazon VPC is available in 13 AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland, London, Frankfurt, Stockholm), Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney), and South America (São Paulo). Customers can enable this through the AWS Console, AWS CLI, or AWS SDKs. To get started, provide the Glue network connection as an additional configuration along with optimization settings such as the default retention period and days to keep unreferenced files. The AWS Glue Data Catalog will use the VPC information in the Glue connection to access Amazon S3 buckets and optimize Apache Iceberg tables. To learn more, read the blog, and visit the AWS Glue Data Catalog documentation.
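As a sketch, enabling compaction for a VPC-restricted table would be a `create_table_optimizer` request carrying the Glue connection; the `vpcConfiguration` key name here is an assumption to be checked against the current API reference, and all identifiers are placeholders:

```python
# Sketch only: parameter shape for glue.create_table_optimizer (boto3).
# The vpcConfiguration key and its contents are assumptions for illustration.
params = {
    "CatalogId": "123456789012",
    "DatabaseName": "analytics",
    "TableName": "events",
    "Type": "compaction",
    "TableOptimizerConfiguration": {
        "roleArn": "arn:aws:iam::123456789012:role/GlueOptimizerRole",  # placeholder
        "enabled": True,
        # Assumed shape: Glue connection carrying the VPC details
        "vpcConfiguration": {"glueConnectionName": "my-vpc-connection"},
    },
}
```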

Amazon CloudWatch Internet Monitor adds AWS Local Zones support for VPC subnets

Today, Amazon CloudWatch Internet Monitor introduces support for select AWS Local Zones. Now, you can monitor internet traffic performance for VPC subnets deployed in Local Zones.

With this new feature, you can also view optimization suggestions that include Local Zones. On the Optimize tab in the Internet Monitor console, select the toggle to include Local Zones in traffic optimization suggestions for your application. Additionally, you can compare your current configuration with other supported Local Zones. Select the option to see more optimization suggestions, and then choose specific Local Zones to compare. By comparing latency differences, you can determine the proposed best configuration for your traffic. At launch, CloudWatch Internet Monitor supports the following Local Zones: us-east-1-dfw-2a, us-east-1-mia-2a, us-east-1-qro-1a, us-east-1-lim-1a, us-east-1-atl-2a, us-east-1-bue-1a, us-east-1-mci-1a, us-west-2-lax-1a, us-west-2-lax-1b, and af-south-1-los-1a. To learn more, visit the Internet Monitor user guide documentation.

Introducing Prompt Optimization in Preview in Amazon Bedrock

Today we are announcing the preview launch of Prompt Optimization in Amazon Bedrock. Prompt Optimization rewrites prompts for higher-quality responses from foundation models.

Prompt engineering is the process of designing prompts that guide foundation models to generate relevant responses. These prompts need to be tailored for each specific foundation model, following the best practices and guidelines for each model. Developers can now use Prompt Optimization in Amazon Bedrock to rewrite their prompts for improved performance on Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, Claude 3 Haiku, Llama 3 70B, Llama 3.1 70B, Mistral Large 2, and Titan Text Premier models. Developers can easily compare the performance of optimized prompts against the original prompts without the need for any deployment. All optimized prompts are saved as part of Prompt Builder for developers to use in their generative AI applications. Amazon Bedrock Prompt Optimization is now available in preview. Learn more here.

Amazon CloudWatch launches full visibility into application transactions

AWS announces the general availability of an enhanced search and analytics experience in CloudWatch Application Signals. This feature empowers developers and on-call engineers with complete visibility into application transaction spans, which are the building blocks of distributed traces that capture detailed interactions between users and various application components.

This feature offers three core benefits. First, developers can answer any question related to application performance or end-user impact through an interactive visual editor and enhancements to Logs Insights queries. They can correlate spans with end-user issues using attributes like customer name or order number. With the new JSON parse and unnest functions in Logs Insights, they can link transactions to business events such as failed payments and troubleshoot. Second, developers can diagnose rarely occurring issues, such as p99 latency spikes in APIs, with the enhanced troubleshooting capabilities in Amazon CloudWatch Application Signals that correlate application metrics with comprehensive transaction spans. Finally, CloudWatch Logs offers advanced features for transaction spans, including data masking, forwarding via subscription filters, and metric extraction. You can enable these capabilities for existing spans sent to X-Ray or by sending spans to a new OTLP (OpenTelemetry Protocol) endpoint for traces. This allows you to enhance your observability while maintaining flexibility in your setup. You can search and analyze spans in all Regions where Application Signals is available. A new pricing option is also available, encompassing Application Signals, X-Ray traces, and complete visibility into transaction spans; see Amazon CloudWatch pricing. Refer to the documentation for more details.

The new AWS Systems Manager experience: Simplifying node management

The new AWS Systems Manager experience helps you scale operational efficiency by simplifying node management, making it easier to manage nodes running anywhere, whether they are EC2 instances, hybrid servers, or servers running in a multicloud environment. The new AWS Systems Manager experience gives you a comprehensive, centralized view to easily manage all of your nodes at scale.

With this launch, you can now see all managed and unmanaged nodes across your organization's AWS accounts and Regions from a single place. You can also identify, diagnose, and remediate unmanaged nodes. Once remediated, meaning they are managed by Systems Manager, you can leverage the full suite of Systems Manager tools to patch nodes with security updates, securely connect to nodes without managing SSH keys or bastion hosts, automate operational commands at scale, and gain comprehensive visibility across your entire fleet. Systems Manager is also now integrated with Amazon Q Developer, which extends your ability to see and control your nodes from anywhere in the AWS console. For example, you can ask Amazon Q to “show me managed instances running Amazon Linux 1” to quickly get the information you need for operational investigations. It’s the same powerful Systems Manager many customers rely on, improved and simplified to help you save time and effort. The new Systems Manager experience is available in AWS Regions found here.

Get started now at no additional cost and easily enable the new experience in Systems Manager. For more information, visit the Systems Manager product page and user guide.

Enhanced account linking experience across AWS Marketplace and AWS Partner Central

Today, AWS announces an improved account linking experience for AWS Partners to create and connect their AWS Marketplace accounts with AWS Partner Central, as well as onboarding associated users. Account Linking allows Partners to seamlessly navigate between Partner Central and Marketplace Management Portal using Single Sign-On (SSO), connect Partner Central solutions to AWS Marketplace listings, link private offers to opportunities for tracking deals from pipeline to customer offers, and access AWS Marketplace insights within the centralized AWS Partner Analytics Dashboard. Linking accounts also unlocks access to valuable Amazon Partner Network (APN) program benefits such as ISV Accelerate and accelerated sales cycles.

The new account linking experience introduces three major improvements to streamline the self-guided linking workflow. First, it simplifies the process to associate your AWS account with AWS Marketplace by registering your legal business name. Second, it automates the creation and bulk assignment of Identity and Access Management (IAM) roles to AWS Partner Central users, eliminating the need for manual creation in the AWS IAM console. Third, it introduces three new AWS managed policies to simplify permission management for AWS Partner Central and Marketplace access. The new policies offer fine-grained access options, ranging from full Partner Central access to personalized access to co-sell or marketplace offer management. This new experience is available for all AWS Partners. To get started, navigate to the “Account Linking” feature on the AWS Partner Central homepage. To learn more, review the AWS Partner Central documentation.

Amazon EC2 C6a and R6a instances now available in additional AWS region

Starting today, compute-optimized Amazon EC2 C6a and memory-optimized Amazon EC2 R6a instances are available in the Asia Pacific (Hyderabad) Region. C6a and R6a instances are powered by third-generation AMD EPYC processors with a maximum frequency of 3.6 GHz. C6a instances deliver up to 15% better price performance than comparable C5a instances, and R6a instances deliver up to 35% better price performance than comparable R5a instances. These instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

With this additional Region, C6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Hong Kong, Mumbai, Singapore, Sydney, Tokyo, Hyderabad), Canada (Central), Europe (Frankfurt, Ireland, London), and South America (Sao Paulo), and R6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo, Hyderabad), and Europe (Frankfurt, Ireland). These instances can be purchased as Savings Plans, Reserved, On-Demand, or Spot Instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the C6a instances and R6a instances pages.

Amazon API Gateway now supports Custom Domain Name for private REST APIs

Amazon API Gateway (APIGW) now gives you the ability to manage your private REST APIs using custom, user-friendly private DNS names like private.example.com, simplifying API discovery. This feature enhances your security posture by continuing to encrypt your private API traffic with Transport Layer Security (TLS), while providing full control over managing the lifecycle of the TLS certificate associated with your domain.

API providers can get started with this feature in four simple steps using the APIGW console and/or APIs. First, create a private custom domain. Second, configure an AWS Certificate Manager (ACM) provided or imported certificate for the domain. Third, map multiple private APIs using base path mappings. Fourth, control invocations of the domain using resource policies. API providers can optionally share the domain across accounts using AWS Resource Access Manager (RAM) to give consumers the ability to access APIs from different accounts. Once a domain is shared using RAM, a consumer can use VPC endpoint(s) to invoke multiple private custom domains across accounts. Custom domain names for private REST APIs are now available on API Gateway in all AWS Regions, including the AWS GovCloud (US) Regions. Please visit the API Gateway documentation and AWS blog post to learn more.
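A hedged sketch of the first and third steps as request shapes; the parameter names are assumptions for illustration (verify against the API Gateway reference), and the domain, ARN, and API ID are placeholders:

```python
# Sketch only: assumed request shapes for a private custom domain.
# Step 1 & 2: create the domain with a PRIVATE endpoint and an ACM certificate.
create_domain = {
    "domainName": "private.example.com",
    "certificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example",  # placeholder
    "endpointConfiguration": {"types": ["PRIVATE"]},
}
# Step 3: map a private REST API under a base path on that domain.
base_path_mapping = {
    "domainName": "private.example.com",
    "restApiId": "abc123",   # placeholder API ID
    "basePath": "orders",
}
# Step 4 (resource policy controlling invocations) is attached separately.
```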

AWS CloudTrail Lake launches enhanced analytics and cross-account data access

AWS announces two significant enhancements to CloudTrail Lake, a managed data lake that enables you to aggregate, immutably store, and analyze your activity logs at scale:

Comprehensive dashboard capabilities: A new “Highlights” dashboard provides an at-a-glance overview of your AWS activity logs including AI-powered insights (AI-powered insights is in preview). Additionally, we have added 14 new pre-built dashboards catering to various use cases such as security and operational monitoring. These dashboards provide a starting point to analyze trends, detect anomalies, and conduct efficient investigations across your AWS environments. For example, the security dashboard displays top access denied events, failed console login attempts, and more. You can also create custom dashboards with scheduled refreshes, tailoring your monitoring to specific needs.

Cross-account sharing of event data stores: This feature allows you to securely share your event data stores with select IAM identities using Resource-Based Policies (RBP). These identities can then query the shared event data store within the same AWS Region where the event data store was created, facilitating more comprehensive analysis across your organization while maintaining security.

These features are available in all AWS Regions where AWS CloudTrail Lake is supported, except AI-powered insights on the “Highlights” dashboard, which is in preview in N. Virginia, Oregon, and Tokyo Regions. While these enhancements are available at no additional cost, standard CloudTrail Lake query charges apply when running queries to generate results or create visualizations for the CloudTrail Lake dashboards. To learn more, visit the AWS CloudTrail documentation or read our News Blog.
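As a sketch, sharing an event data store comes down to attaching a resource-based policy to it; the ARNs and action names below are illustrative assumptions, not values from this announcement:

```python
import json

# Sketch only: a resource-based policy allowing a role in another account to
# query a shared event data store. ARNs and actions are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ShareEventDataStore",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/AnalystRole"},
        "Action": ["cloudtrail:StartQuery", "cloudtrail:GetQueryResults"],
        "Resource": "arn:aws:cloudtrail:us-east-1:999988887777:eventdatastore/EXAMPLE-id",
    }],
}
# The serialized document would be passed to CloudTrail's resource policy API.
policy_json = json.dumps(policy)
```

Note that the shared store can only be queried from the Region where it was created.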

Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers’ experience by running snippets of code on AWS Lambda called canaries, now automatically deletes the associated Lambda resources when you delete a canary, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, the Lambda function and its layers are no longer usable. With this release, these Lambda resources are automatically removed when a canary is deleted, reducing the housekeeping needed to maintain your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. Any new canaries created via the CLI, SDK, or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in. This feature is available in all commercial Regions, the AWS GovCloud (US) Regions, and the China Regions at no additional cost to customers. To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
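As a sketch, the delete request carries a flag for the Lambda cleanup; in boto3 terms (the canary name is a placeholder):

```python
# Sketch only: parameter shape for synthetics.delete_canary (boto3).
params = {
    "Name": "my-canary",   # placeholder
    "DeleteLambda": True,  # also remove the canary's Lambda function and layers
}
# In a real account: boto3.client("synthetics").delete_canary(**params)
```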

Amazon Polly launches more synthetic generative voices

Today, we are excited to announce the general availability of seven highly expressive Amazon Polly Generative voices in English, French, Spanish, German, and Italian.

Amazon Polly is a fully managed service that turns text into lifelike speech, allowing you to create applications that talk and to build engaging speech-enabled products for your business needs. Amazon Polly releases two new female-sounding generative voices (Indian English Kajal and Italian Bianca) and five new male-sounding generative voices: US Spanish Pedro, Mexican Spanish Andrés, European Spanish Sergio, German Daniel, and French Rémi. This launch not only expands the Polly Generative engine to twenty voices, but also offers a unique feature: the five new male-sounding voices have the same voice identity as the US English voice Matthew. The polyglot capability of the voice combined with high expressivity will be useful for customers with a global outreach; the same voice identity can speak multiple languages natively so that end customers enjoy an accent-less switch from one language to another. The Kajal, Bianca, Pedro, Andrés, Sergio, Daniel, and Rémi generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) Regions and complement the other types of voices already available in the same Regions. To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
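As a sketch, requesting one of the new voices is a standard `synthesize_speech` call with the generative engine selected (the text below is a placeholder):

```python
# Sketch only: parameter shape for polly.synthesize_speech (boto3).
params = {
    "Engine": "generative",
    "VoiceId": "Pedro",           # new US Spanish generative voice
    "LanguageCode": "es-US",
    "Text": "Hola, ¿cómo estás?",  # placeholder text
    "OutputFormat": "mp3",
}
# In a real account: boto3.client("polly").synthesize_speech(**params)
```

Because Pedro shares a voice identity with Matthew, switching LanguageCode and VoiceId lets the same persona speak another language natively.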

AWS HealthOmics workflows now support call caching and intermediate file access

We are excited to announce that AWS HealthOmics workflows now support the ability to reuse task results from previous runs, saving time and compute costs for customers. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can accelerate development of new pipelines by resuming runs from a previous point of failure or code change.

Call caching, or the ability to resume runs, enables customers to restart runs from the point where new code changes are introduced, skipping unchanged tasks that have already been computed, for faster iterative workflow development cycles. In addition, task intermediate files are stored in a run cache, enabling advanced debugging and troubleshooting of workflow errors during development. In production workflows, call caching saves partial results from failed runs so that customers can rerun a sample from the point of failure, rather than computing successfully completed tasks again, shortening reprocessing times. Call caching is now supported for the Nextflow, WDL, and CWL workflow languages in all Regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). To get started with call caching, see the AWS HealthOmics documentation.
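As we understand the feature, resuming a run amounts to pointing `StartRun` at a run cache and choosing a cache behavior. The parameter names below reflect that understanding and should be checked against the HealthOmics API reference; the helper function is illustrative:

```python
def start_run_request(workflow_id: str, cache_id: str,
                      role_arn: str, output_uri: str) -> dict:
    """Sketch of kwargs for omics.start_run with call caching enabled.

    Parameter names are our reading of the StartRun API; verify against
    the AWS HealthOmics documentation before use.
    """
    return {
        "workflowId": workflow_id,
        "roleArn": role_arn,
        "outputUri": output_uri,
        "cacheId": cache_id,                  # run cache holding prior task results
        "cacheBehavior": "CACHE_ON_FAILURE",  # save partial results from failed runs
    }

# Usage (requires boto3 and a HealthOmics Region):
# import boto3
# boto3.client("omics").start_run(**start_run_request(
#     "wf-123", "cache-1", "arn:aws:iam::111122223333:role/omics", "s3://bucket/out"))
```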

AWS AppSync launches AI gateway capabilities with new Amazon Bedrock integration in AppSync GraphQL

AWS AppSync is a fully managed API management service that connects applications to events, data, and AI models. Today, customers use AppSync as an AI gateway to trigger generative AI workflows and use subscriptions, powered by WebSockets, to return progressive updates from long-running invocations. This allows them to implement asynchronous patterns. However, in some cases, customers need to make short synchronous invocations to their models. AWS AppSync now supports the Amazon Bedrock runtime as a data source for GraphQL APIs, enabling seamless integration of generative AI capabilities. This new feature allows developers to make short synchronous invocations (10 seconds or less) to foundation models and inference profiles in Amazon Bedrock directly from their AppSync GraphQL APIs.

The integration supports calling the converse and invokeModel APIs. Developers can interact with Anthropic models like Claude 3.5 Haiku and Claude 3.5 Sonnet for data analysis and structured object generation tasks. They can also use Amazon Titan models to generate embeddings, create summaries, or extract action items from meeting minutes. For longer-running invocations, customers can continue using AWS Lambda functions in event mode to interact with Bedrock models and send progressive updates to clients via subscriptions. This new data source is available in all AWS Regions where AWS AppSync is available. To get started, customers can visit the AWS AppSync console and refer to the AWS AppSync documentation for more information.
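Whatever the entry point, a Converse invocation boils down to a request shaped like the following, shown here as kwargs for the Bedrock runtime's converse API (the helper function and model ID are illustrative):

```python
def converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build kwargs for a bedrock-runtime converse call: a message list
    plus an inference configuration capping the response length."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# Usage (requires boto3 and Bedrock model access in your account):
# import boto3
# brt = boto3.client("bedrock-runtime")
# reply = brt.converse(**converse_request(
#     "anthropic.claude-3-5-haiku-20241022-v1:0",  # model ID shown for illustration
#     "Extract the action items from these meeting minutes: ..."))
```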

Amazon MWAA adds smaller environment size

Amazon Managed Workflows for Apache Airflow (MWAA) now offers a micro environment size, giving customers of the managed service the ability to create multiple, independent environments for development and data isolation at a lower cost.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Amazon MWAA micro environments, customers can now create smaller, cost-effective environments that are more efficient for development use, as well as for teams that require data isolation with lightweight workflow requirements. You can create a micro size Amazon MWAA environment with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about larger environments in Amazon MWAA, visit the Launch Blog. To learn more about Amazon MWAA visit the Amazon MWAA documentation.
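Creating a micro environment is the same `CreateEnvironment` call with the new environment class. A sketch of the key parameters, assuming the class name `mw1.micro` (the ARNs are placeholders and the helper function is ours):

```python
def micro_environment_request(name: str, role_arn: str, bucket_arn: str,
                              dag_s3_path: str = "dags") -> dict:
    """Sketch of kwargs for mwaa.create_environment with the new micro size.

    NetworkConfiguration (subnets and security groups) is also required by
    the API; it is omitted here for brevity.
    """
    return {
        "Name": name,
        "EnvironmentClass": "mw1.micro",  # the new, smallest environment size
        "ExecutionRoleArn": role_arn,
        "SourceBucketArn": bucket_arn,
        "DagS3Path": dag_s3_path,
    }

# Usage (requires boto3 and an S3 bucket holding your DAGs):
# import boto3
# boto3.client("mwaa").create_environment(**micro_environment_request(
#     "dev-env",
#     "arn:aws:iam::111122223333:role/mwaa-exec",
#     "arn:aws:s3:::my-dag-bucket"))
```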

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

AWS Elastic Beanstalk adds support for Node.js 22

AWS Elastic Beanstalk now supports building and deploying Node.js 22 applications on AL2023 Beanstalk environments.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 22 on AL2023 provides updates to the V8 JavaScript engine, improved garbage collection, and performance improvements. You can create Elastic Beanstalk environments running Node.js 22 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk Console, Elastic Beanstalk CLI, and the Elastic Beanstalk API. This platform is generally available in commercial Regions where Elastic Beanstalk is available, including the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see AWS Regions. For more information about Node.js and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

AWS CloudFormation Hooks now supports evaluation of AWS Cloud Control API resource configurations

AWS CloudFormation Hooks now allow you to evaluate resource configurations from AWS Cloud Control API (CCAPI) create and update operations. Hooks allow you to invoke custom logic to enforce security, compliance, and governance policies on your resource configurations. CCAPI is a set of common application programming interfaces (APIs) designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and to leverage the latest AWS capabilities faster. By extending Hooks to CCAPI, customers can now inspect resource configurations prior to CCAPI create and update operations, and block the operations or emit warnings if a non-compliant resource is found.

Before this launch, Hooks were invoked only during CloudFormation operations. Now, customers can extend their resource Hook evaluations beyond CloudFormation to CCAPI-based operations. Customers with existing resource Hooks, or who are using the recently launched pre-built Lambda and Guard Hooks, simply need to specify “Cloud_Control” as a target in the Hooks’ configuration. Hooks is available in all AWS Commercial Regions. The CCAPI support is available to customers who use CCAPI directly or third-party IaC tools that have CCAPI provider support. To get started, refer to the Hooks user guide and the CCAPI user guide for more information. Learn more about this feature in the AWS DevOps Blog.

Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process: after identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the “Diagnose with Q” button in the error alert box, and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the “Help me resolve” button to receive actionable resolution steps tailored to your specific failure scenario, accelerating resolution of the error. To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.

Amazon CloudWatch Logs announces field indexes and enhanced log group selection in Logs Insights

Amazon CloudWatch Logs introduces field indexes and enhanced log group selection to accelerate log analysis. Now, you can index critical log attributes like requestId and transactionId to accelerate query performance and scan only relevant indexed data. This means faster troubleshooting and easier identification of trends. You can create up to 20 field indexes per log group, and once defined, all future logs matching the defined fields remain indexed for up to 30 days. Additionally, CloudWatch Logs Insights now supports querying up to 10,000 log groups across one or more accounts linked via cross-account observability.

Customers using field indexes will benefit from faster query execution times while searching across vast amounts of logs. CloudWatch Logs Insights queries using the “filter field = value” syntax will automatically leverage indexes when available. When combined with enhanced log group selection, customers can now gain faster insights across a much larger set of logs in Logs Insights. Customers can select up to 10,000 log groups via either a log group prefix or the “All” log groups option. To further optimize query performance and costs, customers can use the new “filterIndex” command to limit queries to indexed data only. Field indexes are available in all AWS Regions where CloudWatch Logs is available and are included as part of Standard log class ingestion at no additional cost.

To get started, define an index policy at the account level or per log group within the AWS console, or programmatically via the API/CLI. See the documentation to learn more about field indexes.
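Per the announcement, an index policy simply lists the fields to index. As we understand the `PutIndexPolicy` API, the policy document takes a shape like the following (the helper function is ours; verify the document schema against the CloudWatch Logs API reference):

```python
import json

def index_policy_document(fields: list[str]) -> str:
    """Build a JSON policy document listing the log fields to index.

    CloudWatch Logs supports up to 20 field indexes per log group.
    """
    if len(fields) > 20:
        raise ValueError("CloudWatch Logs supports up to 20 field indexes per log group")
    return json.dumps({"Fields": fields})

# Usage (requires boto3):
# import boto3
# logs = boto3.client("logs")
# logs.put_index_policy(
#     logGroupIdentifier="my-log-group",
#     policyDocument=index_policy_document(["requestId", "transactionId"]),
# )
```

Once indexed, a Logs Insights query such as `filter requestId = "abc-123"` automatically uses the index.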

AWS announces support for predictive scaling for Amazon ECS services

Today, AWS announces support for predictive scaling for Amazon Elastic Container Service (Amazon ECS). Predictive scaling leverages advanced machine learning algorithms to proactively scale your Amazon ECS services ahead of demand surges, reducing overprovisioning costs while improving application responsiveness and availability.

Amazon ECS offers a rich set of service auto scaling options, including target tracking and step scaling policies that automatically adjust task counts in response to observed load, as well as scheduled scaling to manually define rules that adjust capacity for routine demand patterns. Many applications observe recurring patterns of steep demand changes, such as early morning spikes when business resumes, where a reactive scaling policy can be slow to respond. Predictive scaling is a new capability that harnesses advanced machine learning algorithms, pre-trained on millions of data points, to proactively scale out ECS services ahead of anticipated demand surges. You can use predictive scaling alongside your existing auto scaling policies, such as target tracking or step scaling, so that your applications scale based on both real-time and historic patterns. You can also choose a “forecast only” mode to evaluate its accuracy and suitability before enabling it to “forecast and scale”. Predictive scaling enhances responsiveness and availability for applications with recurring demand patterns, while also reducing the operational effort of manually configuring scaling policies and the costs of overprovisioning.

You can use the AWS Management Console, SDK, CLI, CloudFormation, and CDK to configure predictive auto scaling for your ECS services. For a list of supported AWS Regions, see the documentation. To learn more, visit this blog post and the documentation.
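A predictive scaling policy is configured like other Application Auto Scaling policies, with a mode that can start in “forecast only”. The field names below mirror EC2 Auto Scaling's predictive scaling configuration and are our assumption for the ECS variant; check them against the ECS documentation before use:

```python
def predictive_scaling_policy(target_cpu: float = 50.0) -> dict:
    """Sketch of a predictive scaling policy configuration for an ECS service.

    Field names are assumptions modeled on EC2 Auto Scaling's predictive
    scaling configuration; verify against the ECS service auto scaling docs.
    """
    return {
        "PolicyType": "PredictiveScaling",
        "PredictiveScalingPolicyConfiguration": {
            # Start in forecast-only mode to evaluate forecast accuracy,
            # then switch to forecast-and-scale once satisfied.
            "Mode": "ForecastOnly",
            "MetricSpecifications": [{
                "TargetValue": target_cpu,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ECSServiceCPUUtilization",
                },
            }],
        },
    }
```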

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Compute Blog

Containers

AWS Database Blog

AWS DevOps & Developer Productivity Blog

Front-End Web & Mobile

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

Open Source Project

AWS CLI

AWS CDK

Amplify for Android

Amplify UI