11/21/2025, 12:00:00 AM ~ 11/24/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview)
Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview). With Spark 4.0.1, you can build and maintain data pipelines more easily with ANSI SQL and VARIANT data types, strengthen compliance and governance frameworks with the Apache Iceberg v3 table format, and deploy new real-time applications faster with enhanced streaming capabilities. This enables your teams to reduce technical debt and iterate more quickly, while ensuring data accuracy and consistency.

With Spark 4.0.1, you can build data pipelines with standard ANSI SQL, making it accessible to a larger set of users who don’t know programming languages like Python or Scala. Spark 4.0.1 natively supports JSON and semi-structured data through VARIANT data types, providing flexibility for handling diverse data formats. You can strengthen compliance and governance through the Apache Iceberg v3 table format, which provides transaction guarantees and tracks how your data changes over time, creating the audit trails you need for regulatory requirements. You can deploy real-time applications faster with improved streaming controls that let you manage complex stateful operations and monitor streaming jobs more easily. With this capability, you can support use cases like fraud detection and real-time personalization.

Apache Spark 4.0.1 is available in preview in all Regions where EMR Serverless is available, excluding the China and AWS GovCloud (US) Regions. To learn more about Apache Spark 4.0.1 on Amazon EMR, visit the Amazon EMR Serverless release notes, or get started by creating an EMR application with Spark 4.0.1 from the AWS Management Console.
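As a quick illustration of the VARIANT type, the following Spark SQL sketch (table and column names are hypothetical) stores raw JSON without a fixed schema and extracts typed fields; check the Spark 4.0 documentation for the exact function signatures:

```sql
-- Store raw JSON as VARIANT, then query fields without declaring a schema.
CREATE TABLE events (payload VARIANT) USING iceberg;

INSERT INTO events
SELECT PARSE_JSON('{"user": {"id": 42}, "action": "click"}');

-- variant_get extracts a JSON path and casts it to the requested type.
SELECT variant_get(payload, '$.user.id', 'int')     AS user_id,
       variant_get(payload, '$.action', 'string')   AS action
FROM events;
```

Because the payload stays semi-structured until query time, new JSON attributes can appear in the data without a schema migration.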
Amazon Athena for Apache Spark is now available in Amazon SageMaker notebooks
Amazon SageMaker now supports Amazon Athena for Apache Spark, bringing a new notebook experience and a fast serverless Spark experience together within a unified workspace. Now, data engineers, analysts, and data scientists can easily query data, run Python code, develop jobs, train models, visualize data, and work with AI from one place, with no infrastructure to manage and per-second billing.

Athena for Apache Spark scales in seconds to support any workload, from interactive queries to petabyte-scale jobs. Athena for Apache Spark now runs on Spark 3.5.6, the same high-performance Spark engine available across AWS, optimized for open table formats including Apache Iceberg and Delta Lake. It brings you new debugging features, real-time monitoring in the Spark UI, and secure interactive cluster communication through Spark Connect. As you use these capabilities to work with your data, Athena for Spark now enforces table-level access controls defined in AWS Lake Formation.
Athena for Apache Spark is now available with Amazon SageMaker notebooks in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit Apache Spark engine version 3.5, read the AWS News Blog or visit Amazon SageMaker documentation. Visit the Getting Started guide to try it from Amazon SageMaker notebooks.
AWS Payment Cryptography announces support for post-quantum cryptography to secure data in transit
Today, AWS Payment Cryptography announces support for hybrid post-quantum (PQ) TLS to secure API calls. With this launch, customers can future-proof transmissions of sensitive data and commands using ML-KEM post-quantum cryptography.

Enterprises operating highly regulated workloads want to reduce post-quantum risks from “harvest now, decrypt later”. Long-lived data in transit can be recorded today, then decrypted in the future when a sufficiently capable quantum computer becomes available. With today’s launch, AWS Payment Cryptography joins data protection services such as AWS Key Management Service (KMS) in addressing this concern by supporting PQ-TLS. To get started, simply ensure that your application depends on a version of the AWS SDK or a browser that supports PQ-TLS. For detailed guidance by language and platform, visit the PQ-TLS enablement documentation. Customers can also validate that ML-KEM was used to secure the TLS session for an API call by reviewing tlsDetails for the corresponding CloudTrail event in the console or a configured CloudTrail trail.

These capabilities are generally available in all AWS Regions at no added cost. To get started with PQ-TLS and Payment Cryptography, see our post-quantum TLS guide. For more information about PQC at AWS, please see PQC shared responsibility.
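The CloudTrail validation step can be scripted. The sketch below is a minimal illustration, not an official check: the `tlsDetails` element is real CloudTrail structure, but the `keyExchange` field and the ML-KEM marker string are assumptions you should adapt to what your events actually contain.

```python
def used_post_quantum_tls(event: dict) -> bool:
    """Return True if a CloudTrail event's tlsDetails suggest an ML-KEM
    hybrid key exchange was negotiated for the API call. The marker
    string and field names are illustrative assumptions."""
    tls = event.get("tlsDetails", {})
    # Scan every tlsDetails value for an ML-KEM marker; the exact string
    # depends on how your events report the negotiated group.
    return any("MLKEM" in str(v).upper().replace("-", "") for v in tls.values())

events = [
    {"eventName": "GenerateDataKey",
     "tlsDetails": {"tlsVersion": "TLSv1.3",
                    "cipherSuite": "TLS_AES_256_GCM_SHA384"}},
    {"eventName": "GenerateDataKey",
     "tlsDetails": {"tlsVersion": "TLSv1.3",
                    "cipherSuite": "TLS_AES_256_GCM_SHA384",
                    "keyExchange": "X25519MLKEM768"}},  # hypothetical field
]
pq_events = [e for e in events if used_post_quantum_tls(e)]
```

A filter like this could run over a CloudTrail Lake query result or a trail's S3 export to confirm which calls negotiated PQ-TLS.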
Announcing a Fully Managed Appium Endpoint for AWS Device Farm
AWS Device Farm enables mobile and web developers to test their apps using real mobile devices and desktop browsers. Starting today, you can connect to a fully managed Appium endpoint using only a few lines of code and run interactive tests on multiple physical devices directly from your IDE or local machine. This feature also works seamlessly with third-party tools such as Appium Inspector (both hosted and local versions) for all actions, including element inspection.

Support for live video and log streaming enables you to get faster test feedback within your local workflow. It complements our existing server-side execution, which gives you the scale and control to run secure, enterprise-grade workloads. Taken together, Device Farm now offers you the ability to author, inspect, debug, test, and release mobile apps faster, whether from your IDE, the AWS Console, or other environments.
To learn more, see Appium Testing in the AWS Device Farm Developer Guide.
EC2 Image Builder now supports auto-versioning and enhances Infrastructure as Code experience
Amazon EC2 Image Builder now supports automatic versioning for recipes and automatic build version incrementing for components, reducing the overhead of managing versions manually. This enables you to increment versions automatically and dynamically reference the latest compatible versions in your pipelines without manual updates.

With automatic versioning, you no longer need to manually track and increment version numbers when creating new versions of your recipes. You simply place a single ‘x’ placeholder in any position of the version number, and Image Builder detects the latest existing version and automatically increments that position. For components, Image Builder automatically increments the build version when you create a component with the same name and semantic version. When referencing resources in your configurations, wildcard patterns automatically resolve to the highest available version matching the specified pattern, ensuring your pipelines always use the latest versions.

Auto-versioning is available in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK. Refer to the documentation to learn more about recipes, components, and semantic versioning.
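To make the ‘x’ placeholder semantics concrete, here is a small local model of the resolution logic described above (an illustration only, not Image Builder’s implementation, and the behavior when no prior version matches is an assumption): given existing versions and a pattern such as 1.x.0, it finds the latest match and increments the wildcard position.

```python
def next_version(pattern: str, existing: list[str]) -> str:
    """Model of auto-versioning: a single 'x' in one position of a
    semantic version resolves to (latest matching value + 1), or 0 if
    no prior version matches (starting value is an assumption)."""
    parts = pattern.split(".")
    wild = parts.index("x")  # position of the placeholder

    def matches(v: str) -> bool:
        vp = v.split(".")
        return len(vp) == len(parts) and all(
            i == wild or vp[i] == parts[i] for i in range(len(parts)))

    matching = [int(v.split(".")[wild]) for v in existing if matches(v)]
    parts[wild] = str(max(matching) + 1 if matching else 0)
    return ".".join(parts)

print(next_version("1.x.0", ["1.0.0", "1.1.0", "2.3.0"]))  # → 1.2.0
```

Note how 2.3.0 is ignored: only versions agreeing with the fixed positions of the pattern participate in the increment.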
Automated Reasoning checks now include natural language test Q&A generation
AWS announces the launch of natural language test Q&A generation for Automated Reasoning checks in Amazon Bedrock Guardrails. Automated Reasoning checks use formal verification techniques to validate the accuracy and policy compliance of outputs from generative AI models. Automated Reasoning checks deliver up to 99% accuracy at detecting correct responses from LLMs, giving you provable assurance in detecting AI hallucinations while also assisting with ambiguity detection in model responses.

To get started with Automated Reasoning checks, customers create and test Automated Reasoning policies using natural language documents and sample Q&As. Automated Reasoning checks generate up to N test Q&As for each policy using content from the input document, reducing the work required to go from initial policy generation to a production-ready, refined policy.

Test generation for Automated Reasoning checks is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris) Regions. Customers can access the service through the Amazon Bedrock console, as well as the Amazon Bedrock Python SDK. To learn more about Automated Reasoning checks and how you can integrate them into your generative AI workflows, please read the Amazon Bedrock documentation, review the tutorials on the AWS AI blog, and visit the Bedrock Guardrails webpage.
AWS IoT Core enhances IoT rules-SQL with variable setting and error handling capabilities
AWS IoT Core now supports a SET clause in IoT rules-SQL, which lets you set and reuse variables across SQL statements. This new feature provides a simpler SQL experience and ensures consistent content when variables are used multiple times. Additionally, a new get_or_default() function provides improved failure handling by returning default values when encountering data encoding or external dependency issues, ensuring IoT rules continue execution successfully.

AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS cloud. Rules for AWS IoT is a component of AWS IoT Core that enables you to filter, process, and decode IoT device data using SQL-like statements, and route the data to 20+ AWS and third-party services. As you define an IoT rule, these new capabilities help you eliminate complicated SQL statements and make it easy for you to manage IoT rules-SQL failures.
These new features are available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and AWS China Regions. For more information and to get started, visit the developer guides on the SET clause and the get_or_default() function.
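The shape of an IoT rule using both additions might look like the following sketch. This is illustrative only: the `decode(value, 'base64')` function is existing IoT rules-SQL, but the SET placement and the get_or_default() argument order shown here are assumptions; consult the linked developer guides for the definitive grammar.

```sql
-- Bind a decoded value once, then reuse it, instead of repeating the
-- expression in SELECT and WHERE (syntax illustrative).
SET decoded = get_or_default(decode(encoding, 'base64'), '{}')
SELECT topic() AS source_topic,
       decoded AS payload
FROM 'sensors/+/telemetry'
WHERE decoded <> '{}'
```

If decoding fails, get_or_default() returns the fallback value instead of aborting the rule, so the message is still routed.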
Amazon Connect launches monitoring of contacts queued for callback
Amazon Connect now provides you with the ability to monitor which contacts are queued for callback. This feature enables you to search for contacts queued for callback and view additional details, such as the customer’s phone number and time spent in queue, within the Connect UI and APIs. You can now proactively route to agents those contacts that are at risk of exceeding the callback timelines communicated to customers. Businesses can also identify customers that have already successfully connected with agents, and clear them from the callback queue to remove duplicative work.

This feature is available in all Regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
Aurora DSQL launches new Python, Node.js, and JDBC Connectors that simplify IAM authorization
Today we are announcing the release of Aurora DSQL Connectors for Python, Node.js, and JDBC that simplify IAM authorization for customers using standard PostgreSQL drivers to connect to Aurora DSQL clusters. These connectors act as transparent authentication layers that automatically handle IAM token generation, eliminating the need to write token generation code or manually supply IAM tokens. The connectors work seamlessly with popular PostgreSQL drivers, including psycopg and psycopg2 for Python, node-postgres and Postgres.js for Node.js, and the standard PostgreSQL JDBC driver, while supporting existing workflows, connection pooling libraries (including HikariCP for JDBC and built-in pooling for Node.js and Python), and frameworks like Spring Boot.

The Aurora DSQL Connectors streamline authentication and eliminate security risks associated with traditional user-generated passwords. By automatically generating IAM tokens for each connection using valid AWS credentials and the AWS SDK, the connectors ensure valid tokens are always used while maintaining full compatibility with existing PostgreSQL driver features. These connectors are available in all Regions where Aurora DSQL is available. To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub pages for node-postgres, Postgres.js, psycopg and psycopg2, and JDBC. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.
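The step the connectors remove can be seen in a manual sketch like this one. The endpoint, user name, and token placeholder are illustrative; with a connector, you no longer generate or refresh the token yourself and instead hand your cluster endpoint to the driver directly.

```python
def psycopg_kwargs(host: str, token: str) -> dict:
    """Build keyword arguments for psycopg.connect() against an Aurora
    DSQL cluster, using a short-lived IAM auth token as the password.
    Without a connector, you must generate this token yourself (for
    example with the AWS SDK) and regenerate it whenever it expires;
    the Aurora DSQL Connectors do both automatically."""
    return {
        "host": host,          # your cluster endpoint (placeholder below)
        "port": 5432,
        "dbname": "postgres",
        "user": "admin",
        "password": token,     # IAM token, not a long-lived stored password
        "sslmode": "require",  # connections to DSQL use TLS
    }

kwargs = psycopg_kwargs("my-cluster.example.on.aws", "<iam-token>")
# psycopg.connect(**kwargs) would open the session once a real
# endpoint and freshly generated token are supplied.
```

Because the token is per-connection and short-lived, pooling libraries need the regeneration handled for them, which is exactly what the connectors layer in.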
Amazon EMR 7.12 now supports the Apache Iceberg v3 table format
Amazon EMR 7.12 is now available, featuring the new Apache Iceberg v3 table format with Apache Iceberg 1.10. This release enables you to reduce costs when deleting data, strengthen governance and compliance through better tracking of row-level changes, and enhance data security with more granular data access control.

With Iceberg v3, you can delete data cost-effectively because Iceberg v3 marks deleted rows without rewriting entire files, speeding up your data pipelines while reducing storage costs. You get better governance and compliance capabilities through automatic tracking of every row’s creation and modification history, creating the audit trails needed for regulatory requirements and change data capture. You can enhance data security with table-level encryption, helping you meet privacy regulations for your most sensitive data. With Apache Spark 3.5.6 included in this release, you can leverage these Iceberg 1.10 capabilities for building robust data lakehouse architectures on Amazon S3. This release also includes support for data governance operations across your Iceberg tables using AWS Lake Formation, as well as Apache Trino 476.

Amazon EMR 7.12 is available in all AWS Regions that support Amazon EMR. To learn more about the Amazon EMR 7.12 release, visit the Amazon EMR 7.12 release documentation.
Second-generation AWS Outposts racks now supported in the AWS Asia Pacific (Tokyo) Region
Second-generation AWS Outposts racks are now supported in the AWS Asia Pacific (Tokyo) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.

Organizations from startups to enterprises and the public sector, in and outside of Japan, can now order their Outposts racks connected to this newly supported Region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to. To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
Amazon Aurora DSQL database clusters now support up to 256 TiB of storage volume
Amazon Aurora DSQL now supports a maximum storage limit of 256 TiB, doubling the previous limit of 128 TiB. Now, customers can store and manage larger datasets within a single database cluster, simplifying data management for large-scale applications. With Aurora DSQL, customers only pay for the storage they use, and storage automatically scales with usage, ensuring that customers do not need to provision storage upfront.

All Aurora DSQL clusters have a default storage limit of 10 TiB. Customers that need higher storage limits can request a limit increase using either the Service Quotas console or the AWS CLI. Visit the Service Quotas documentation for a step-by-step guide to requesting a quota increase. The increased storage limits are available in all Regions where Aurora DSQL is available. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage and documentation.
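The quota increase can also be requested with the AWS SDK. In this sketch the service and quota codes are placeholders, not real values: list_service_quotas and request_service_quota_increase are real Service Quotas APIs, but you must look up the actual codes for the DSQL storage quota in your account first.

```python
def quota_increase_params(service_code: str, quota_code: str,
                          desired: float) -> dict:
    """Parameters for service-quotas:RequestServiceQuotaIncrease.
    Discover the real codes first, for example:
      boto3.client("service-quotas").list_service_quotas(
          ServiceCode=service_code)
    """
    return {
        "ServiceCode": service_code,
        "QuotaCode": quota_code,
        "DesiredValue": desired,
    }

# Hypothetical codes -- substitute the actual ones for your account/Region.
params = quota_increase_params("dsql", "L-XXXXXXXX", 50.0)  # 50 TiB limit
# boto3.client("service-quotas").request_service_quota_increase(**params)
```

The call is left commented out because it submits a real quota request once valid codes and credentials are in place.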
AWS WAF announces Web Bot Auth support
Today, we’re excited to announce the addition of Web Bot Auth (WBA) support in AWS WAF, providing a secure and standardized way to authenticate legitimate AI agents and automated tools accessing web applications. This new capability helps distinguish trusted bot traffic from potentially harmful automated access attempts.

Web Bot Auth is an authentication method that leverages cryptographic signatures in HTTP messages to verify that a request comes from an automated bot. Web Bot Auth is used as a verification method for verified bots and signed agents. It relies on two active IETF drafts: a directory draft allowing crawlers to share their public keys, and a protocol draft defining how these keys should be used to attach a crawler’s identity to HTTP requests.
AWS WAF now automatically allows verified AI agent traffic. Verified WBA bots are now allowed by default; previously, the AI category blocked unverified bots, and this behavior has been refined to respect WBA verification.
To learn more, please review the documentation.
AWS announces Flexible Cost Allocation on AWS Transit Gateway
AWS announces general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization.

Previously, Transit Gateway only used a sender-pay model, where the source attachment account owner was responsible for all data usage related costs. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at an attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure.

Flexible Cost Allocation is available in all commercial AWS Regions where Transit Gateway is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway documentation pages.
Amazon Athena adds cost and performance controls for Capacity Reservations
Amazon Athena now gives you control over Data Processing Unit (DPU) usage for queries running on Capacity Reservations. You can now configure DPU settings at the workgroup or query level to balance cost efficiency, concurrency, and query-level performance needs.

Capacity Reservations provides dedicated serverless processing capacity for your Athena queries. Capacity is measured in DPUs, and queries consume DPUs based on their complexity. Now you can set explicit DPU values for each query, ensuring small queries use only what they need while guaranteeing critical queries get sufficient resources for fast execution. The Athena console and API now return per-query DPU usage, helping you understand DPU usage and determine your capacity needs. These updates help you control per-query capacity usage, control query concurrency, reduce costs by eliminating over-provisioning, and deliver consistent performance for business-critical workloads.

Cost and performance controls are available today in AWS Regions where Capacity Reservations is supported. To learn more, see Control capacity usage in the Athena user guide.
AWS Security Incident Response now provides agentic AI-powered investigation
AWS Security Incident Response now provides agentic AI-powered investigation capabilities to help you prepare for, respond to, and recover from security events faster and more effectively. The new investigative agent automatically gathers evidence across multiple AWS data sources, correlates the data, then presents findings for you in clear, actionable summaries. This helps you reduce the time required to investigate and respond to potential security events, thereby minimizing business disruption.

When a security event case is created in the Security Incident Response console, the investigative agent immediately assesses the case details to identify missing information, such as potential indicators, resource names, and timeframes. It asks the case submitter clarifying questions to gather these details. This proactive approach helps minimize delays from back-and-forth communications that traditionally extend case resolution times. The investigative agent then collects relevant information from various data sources, such as AWS CloudTrail, AWS Identity and Access Management (IAM), Amazon EC2, and AWS Cost Explorer. It automatically correlates this data to provide you with a comprehensive analysis, reducing the need for manual evidence gathering and enabling faster investigation. Security teams can track all investigation activities directly through the AWS console and view summaries in their preferred integration tools.

This feature is automatically enabled for all Security Incident Response customers at no additional cost in all AWS Regions where the service is available. To learn more and get started, visit the Security Incident Response overview page and console.
Amazon Location Service introduces Address Form Solution Builder
Today, AWS announced Address Form Solution Builder from Amazon Location Service, enabling developers to build a customized address form, without writing any code, that helps their users enter their address with predictive suggestions, autofills address fields such as postal code, and offers a customizable layout. This guided experience allows developers to generate a ready-to-use application in minutes and download the developer package in React JavaScript, React TypeScript, or standalone HTML/JavaScript.

Developers can use address forms to improve the user experience, speed, and accuracy of collecting address information from their users. Features such as predictive suggestions help end users select their complete address after just a few keystrokes, reducing data entry time and error rates. The integrated map view lets users visualize their selected address’s location and adjust the placement of the pin on the map to indicate a specific entrance. By improving the speed and accuracy of address collection, enterprises can improve their customer experience, reduce fraud, and increase delivery success rates.

Amazon Location Service’s Address Form Solution Builder is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). Build your first address form using the Amazon Location Console or learn more about this feature in our Developer Guide.
AWS Cost Anomaly Detection accelerates anomaly identification
AWS Cost Anomaly Detection now features an improved detection algorithm that enables faster identification of unusual spending patterns. The enhanced algorithm analyzes your AWS spend using rolling 24-hour windows, comparing current costs against equivalent time periods from previous days each time AWS receives updated cost and usage data.

The enhanced algorithm addresses two common challenges in cost pattern analysis. First, it removes the delay in anomaly detection caused by comparing incomplete calendar-day costs against historical daily totals. The rolling window always compares full 24-hour periods, enabling faster identification of unusual patterns. Second, it provides more accurate comparisons by evaluating costs against similar times of day, accounting for workloads that have different morning and evening usage patterns. These improvements help reduce false positives while enabling faster, more accurate anomaly detection.

This enhancement to AWS Cost Anomaly Detection is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about this new feature, AWS Cost Anomaly Detection, and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection product page and getting started guide.
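The rolling-window idea can be illustrated with a toy comparison. This is a sketch of the concept only, not the service’s actual algorithm: it sums the most recent 24 hours of spend and compares that against the same 24-hour window one day earlier, so a spike surfaces without waiting for a calendar day to complete.

```python
def rolling_window_ratio(hourly_costs: list[float], window: int = 24) -> float:
    """Compare the most recent `window` hours of spend against the
    equivalent window one day earlier. A ratio well above 1.0 hints at
    an anomaly. Toy model of the rolling-window idea only."""
    current = sum(hourly_costs[-window:])           # last 24 hours
    previous = sum(hourly_costs[-2 * window:-window])  # same window, day before
    return current / previous if previous else float("inf")

# Three days of flat $1/hour spend, then a spike in the final 6 hours.
costs = [1.0] * 66 + [10.0] * 6
ratio = rolling_window_ratio(costs)  # (18 + 60) / 24 = 3.25
```

A calendar-day comparison would see only a partial day of elevated spend until midnight; the rolling window registers the full spike as soon as it falls inside the last 24 hours.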
AWS Transfer Family announces Terraform module to integrate with a custom identity provider
The AWS Transfer Family Terraform module now supports deploying Transfer Family endpoints with a custom identity provider (IdP) for authentication and access control. This allows you to automate and streamline the deployment of Transfer Family servers integrated with your existing identity providers.

AWS Transfer Family provides fully managed file transfers over SFTP, AS2, FTPS, FTP, and web browser-based interfaces for AWS storage services. Using this new module, you can now use Terraform to provision Transfer Family server resources using your custom authentication systems, eliminating manual configurations and enabling repeatable deployments that scale with your business needs. The module is built on the open source Custom IdP solution, which provides standardized integration with widely used identity providers and includes built-in security controls such as multi-factor authentication, audit logging, and per-user IP allowlisting. To help you get started, the Terraform module includes an end-to-end example using Amazon Cognito user pools.

Customers can get started by using the new module from the Terraform Registry. To learn more about the Transfer Family Custom IdP solution, visit the user guide. To see all the Regions where Transfer Family is available, visit the AWS Region table.
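A deployment using the module might be sketched like this. Everything in the block is illustrative: the module source address and the input names are assumptions, so check the module’s Terraform Registry entry for the real ones before use.

```hcl
# Illustrative only -- consult the Terraform Registry for the actual
# module source address and supported inputs.
module "sftp_with_custom_idp" {
  source = "aws-ia/transfer-family/aws" # hypothetical registry address

  server_name = "sftp-prod"
  protocols   = ["SFTP"]

  # Back authentication with the open source Custom IdP solution,
  # e.g. an Amazon Cognito user pool, as in the module's example.
  identity_provider = "custom"
}
```

Keeping the server and its IdP wiring in one module call is what makes the deployment repeatable across environments.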
Introducing one-click onboarding of existing datasets to Amazon SageMaker
Amazon SageMaker introduces one-click onboarding of existing AWS datasets to Amazon SageMaker Unified Studio. This helps AWS customers start working with their data in minutes, using their existing AWS Identity and Access Management (IAM) roles and permissions. Customers can start working with any data they have access to using a new serverless notebook with a built-in AI agent. This new notebook, which supports SQL, Python, Spark, or natural language, gives data engineers, analysts, and data scientists a single high-performance interface to develop and run both SQL queries and code. Customers also have access to many other existing tools such as a Query Editor for SQL analysis, the JupyterLab IDE, Visual ETL and workflows, and machine learning (ML) capabilities. The ML capabilities include the ability to discover foundation models from a centralized model hub, customize them with sample notebooks, use MLflow for experimentation, publish trained models in the model hub for discovery, and deploy them as inference endpoints for prediction.

Customers can start directly from the Amazon SageMaker, Amazon Athena, Amazon Redshift, and Amazon S3 Tables console pages, giving them a fast path from their existing tools and data to the simple experience in SageMaker Unified Studio. After clicking ‘Get started’ and specifying an IAM role, SageMaker prompts for specific policy updates and then automatically creates a project in SageMaker Unified Studio. The project is set up with all existing data permissions from AWS Glue Data Catalog, AWS Lake Formation, and Amazon S3, and a notebook and serverless compute are pre-configured to accelerate first use. To get started, simply click “Get Started” from the SageMaker console or open SageMaker Unified Studio from Amazon Athena, Amazon Redshift, or Amazon S3 Tables.

One-click onboarding of existing datasets is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, read the AWS News Blog or visit the Amazon SageMaker documentation.
Introducing Amazon SageMaker Data Agent for analytics and AI/ML development
Amazon SageMaker introduces a built-in AI agent that accelerates the development of data analytics and machine learning (ML) applications. SageMaker Data Agent is available in the new notebook experience in Amazon SageMaker Unified Studio and helps data engineers, analysts, and data scientists who spend significant time on manual setup tasks and boilerplate code when building analytics and ML applications. The agent generates code and execution plans from natural language prompts and integrates with data catalogs and business metadata to streamline the development process.

SageMaker Data Agent works within the new notebook experience to break down complex analytics and ML tasks into manageable steps. Customers can describe objectives in natural language, and the agent creates a detailed execution plan and generates the required SQL and Python code. The agent maintains awareness of the notebook context, including available data sources and catalog information, accelerating common tasks including data transformation, statistical analysis, and model development. To get started, log in to Amazon SageMaker and click “Notebooks” in the left navigation.

Amazon SageMaker Data Agent is available in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, read the AWS News Blog or visit the Amazon SageMaker documentation.
AWS CloudFormation StackSets now supports deployment ordering
AWS CloudFormation StackSets now offers deployment ordering for auto-deployment mode, enabling you to define the sequence in which your stack instances automatically deploy across accounts and Regions. This capability allows you to coordinate complex multi-stack deployments where foundational infrastructure must be provisioned before dependent application components. Organizations managing large-scale deployments can now ensure proper deployment ordering without manual intervention.

When creating or updating a CloudFormation StackSet, you can specify up to 10 dependencies per stack instance using the new DependsOn parameter in the AutoDeployment configuration, allowing StackSets to automatically orchestrate deployments based on your defined relationships. For example, you can make sure that your networking and security stack instances complete deployment before your application stack instances begin, preventing deployment failures due to missing dependencies. StackSets includes built-in cycle detection to prevent circular dependencies and provides error messages to help resolve configuration issues.

This feature is available in all AWS Regions where CloudFormation StackSets is available, at no additional cost. Get started by creating or updating your StackSets auto-deployment option through the CLI, SDK, or the CloudFormation Console to define dependencies using stack instance ARNs. To learn more about StackSets deployment ordering, check out the detailed feature walkthrough on the AWS DevOps Blog or visit the AWS CloudFormation User Guide.
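Based on the DependsOn description above, an AutoDeployment configuration with ordering might look like the following sketch. The exact property shape and the ARN format shown are assumptions; see the CloudFormation User Guide for the definitive schema.

```json
{
  "AutoDeployment": {
    "Enabled": true,
    "RetainStacksOnAccountRemoval": false,
    "DependsOn": [
      "arn:aws:cloudformation:us-east-1:111122223333:stack-instance/networking-stackset/EXAMPLE",
      "arn:aws:cloudformation:us-east-1:111122223333:stack-instance/security-stackset/EXAMPLE"
    ]
  }
}
```

With a configuration like this, the application StackSet would wait for the referenced networking and security stack instances before its own instances deploy.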
AWS introduces new VPC Encryption Controls and further raises the bar on data encryption
AWS launches VPC Encryption Controls to make it easy to audit and enforce encryption in transit within and across Amazon Virtual Private Clouds (VPCs), and to demonstrate compliance with encryption standards. You can turn it on for your existing VPCs to monitor the encryption status of traffic flows and identify VPC resources that are unintentionally allowing plaintext traffic. This feature also makes it easy to enforce encryption across different network paths by automatically (and transparently) turning on hardware-based AES-256 encryption on traffic between multiple VPC resources, including AWS Fargate, Network Load Balancers, and Application Load Balancers.

To meet stringent compliance standards like HIPAA and PCI DSS, customers rely on both application-layer encryption and the hardware-based encryption that AWS offers across different network paths. AWS provides hardware-based AES-256 encryption transparently between modern EC2 Nitro instances. AWS also encrypts all network traffic between AWS data centers in and across Availability Zones and AWS Regions before the traffic leaves our secure facilities. All inter-Region traffic that uses VPC Peering, Transit Gateway Peering, or AWS Cloud WAN receives an additional layer of transparent encryption before leaving AWS data centers. Prior to this release, customers had to track and confirm encryption across all network paths. With VPC Encryption Controls, customers can now monitor, enforce, and demonstrate encryption within and across VPCs in just a few clicks. Your information security team can turn it on centrally to maintain a secure and compliant environment, and generate audit logs for compliance and reporting.

VPC Encryption Controls is now available in the following commercial AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Milan), Europe (Zurich), Europe (Stockholm), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Melbourne), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Canada West (Calgary), Canada (Central), Middle East (UAE), Middle East (Bahrain), Africa (Cape Town), and South America (São Paulo). To learn more about this feature and its use cases, please see our documentation.
AWS License Manager introduces license asset groups for centralized software asset management
AWS License Manager now provides centralized software asset management across AWS regions and accounts in an organization, reducing compliance risks and streamlining license tracking through automated license asset groups. Customers can now track license expiry dates, streamline audit responses, and make data-driven renewal decisions with a product-centric view of their commercial software portfolio.

With this launch, customers no longer need to manually track licenses across multiple regions and accounts in their organization. Now with license asset groups, customers can gain organization-wide visibility of their commercial software usage with customizable grouping and automated reporting. The new feature is available in all commercial regions where AWS License Manager is available. To get started, visit the Licenses section of the AWS License Manager console, and see the AWS License Manager User Guide.
Amazon EKS add-ons now supports the AWS Secrets Store CSI Driver provider
Today, AWS announces the general availability of the AWS Secrets Store CSI Driver provider EKS add-on. This new integration allows customers to retrieve secrets from AWS Secrets Manager and parameters from AWS Systems Manager Parameter Store and mount them as files on their Kubernetes clusters running on Amazon Elastic Kubernetes Service (Amazon EKS). The add-on installs and manages the AWS provider for the Secrets Store CSI Driver.

Now, with the new Amazon EKS add-on, customers can quickly and easily set up new and existing clusters using automation to leverage AWS Secrets Manager and AWS Systems Manager Parameter Store, enhancing security and simplifying secrets management. Amazon EKS add-ons are curated extensions that automate the installation, configuration, and lifecycle management of operational software for Kubernetes clusters, simplifying the process of maintaining cluster functionality and security. Customers rely on AWS Secrets Manager to securely store and manage secrets such as database credentials and API keys throughout their lifecycle. To learn more about Secrets Manager, visit the documentation. For a list of regions where Secrets Manager is available, see the AWS Region table. To get started with Secrets Manager, visit the Secrets Manager home page. This new Amazon EKS add-on is available in all AWS commercial and AWS GovCloud (US) Regions. To get started, see the following resources:
Amazon EKS add-ons user guide
AWS Secrets Manager user guide
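Once the add-on is installed, pods reference secrets through a SecretProviderClass object. The sketch below expresses such a manifest as a Python dict (it would normally be YAML applied with kubectl); the metadata name, secret name, and object entry are illustrative placeholders, while the apiVersion, kind, and `provider: aws` values follow the Secrets Store CSI Driver conventions.

```python
# Sketch of a SecretProviderClass for the AWS provider of the Secrets Store
# CSI Driver, as a Python dict. Names below are placeholders.
secret_provider_class = {
    "apiVersion": "secrets-store.csi.x-k8s.io/v1",
    "kind": "SecretProviderClass",
    "metadata": {"name": "db-credentials"},
    "spec": {
        "provider": "aws",
        "parameters": {
            # "objects" is a YAML string listing the Secrets Manager secrets
            # or SSM parameters to mount as files inside the pod
            "objects": "- objectName: my-db-secret\n  objectType: secretsmanager",
        },
    },
}
```

A pod then mounts this class through a CSI volume of driver `secrets-store.csi.k8s.io`, and the secret appears as a file in the mounted path.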
AWS Control Tower now supports seven new compliance frameworks and 279 additional AWS Config rules
Today, AWS Control Tower announces support for an additional 279 managed Config rules in Control Catalog for various use cases such as security, cost, durability, and operations. With this launch, you can now search, discover, enable, and manage these additional rules directly from AWS Control Tower and govern more use cases for your multi-account environment. AWS Control Tower also supports seven new compliance frameworks in Control Catalog. In addition to existing frameworks, most controls are now mapped to ACSC-Essential-Eight-Nov-2022, ACSC-ISM-02-Mar-2023, AWS-WAF-v10, CCCS-Medium-Cloud-Control-May-2019, CIS-AWS-Benchmark-v1.2, CIS-AWS-Benchmark-v1.3, and CIS-v7.1.

To get started, go to the Control Catalog and search for controls with the implementation filter AWS Config to view all AWS Config rules in the Catalog. You can enable relevant rules directly using the AWS Control Tower console or the ListControls, GetControl, and EnableControl APIs. We’ve also enhanced control relationship mapping, helping you understand how different controls work together. The updated ListControlMappings API now reveals important relationships between controls, showing which ones complement each other, are alternatives, or are mutually exclusive. For instance, you can now easily identify when a Config Rule (detection) and a Service Control Policy (prevention) can work together for comprehensive security coverage. These new features are available in AWS Regions where AWS Control Tower is available, including AWS GovCloud (US). Reference the list of supported regions for each Config rule to see where it can be enabled. To learn more, visit the AWS Control Tower User Guide.
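Enabling one of these rules programmatically comes down to passing a control identifier and a target (such as an organizational unit) to the EnableControl API. The request below is a minimal sketch; both ARNs are placeholders, and in practice the payload would be sent via an SDK such as boto3.

```python
# Minimal sketch of an EnableControl request body. The control and OU
# identifiers are placeholders, not real ARNs.
enable_control_request = {
    "controlIdentifier": "arn:aws:controlcatalog:::control/example-config-rule",
    "targetIdentifier": "arn:aws:organizations::111111111111:ou/o-example/ou-example",
}
# With boto3 this would correspond to something like:
#   boto3.client("controltower").enable_control(**enable_control_request)
```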
CloudWatch Database Insights adds cross-account cross-region monitoring
Amazon CloudWatch Database Insights now supports cross-account and cross-region database fleet monitoring, enabling centralized observability across your entire AWS database infrastructure. This enhancement allows DevOps engineers and database administrators to monitor, troubleshoot, and optimize databases spanning multiple AWS accounts and regions from a single unified console experience.

With this new capability, organizations can gain holistic visibility into their distributed database environments without account or regional boundaries. Teams can now correlate performance issues across their entire database fleet, streamline incident response workflows, and maintain consistent monitoring standards across complex multi-account architectures, significantly reducing operational overhead and improving mean time to resolution. This feature is available in all AWS commercial regions where CloudWatch Database Insights is supported. To learn more about cross-account and cross-region monitoring in CloudWatch Database Insights, as well as instructions to get started monitoring your databases across your entire organization and regions, visit the CloudWatch Database Insights documentation.
Amazon OpenSearch Service OR2 and OM2 now available in additional Regions
Amazon OpenSearch Service expands availability of the OR2 and OM2 OpenSearch Optimized instance families to additional regions. The OR2 instance delivers up to 26% higher indexing throughput compared to previous OR1 instances and 70% over R7g instances. The OM2 instance delivers up to 15% higher indexing throughput compared to OR1 instances and 66% over M7g instances in internal benchmarks.

OpenSearch Optimized instances leverage best-in-class cloud technologies like Amazon S3 to provide high durability and improved price-performance for indexing-heavy workloads. Each OpenSearch Optimized instance is provisioned with compute, local instance storage for caching, and remote Amazon S3-based managed storage. OR2 and OM2 offer pay-as-you-go pricing and reserved instances, with a simple hourly rate for the instance, local instance storage, and the managed storage provisioned. OR2 instances come in sizes ‘medium’ through ‘16xlarge’, and offer compute, memory, and storage flexibility. OM2 instances come in sizes ‘large’ through ‘16xlarge’. Please refer to the Amazon OpenSearch Service pricing page for pricing details. The OR2 instance family is now available on Amazon OpenSearch Service across 11 additional regions globally: US West (N. California), Canada (Central), Asia Pacific (Hong Kong, Jakarta, Malaysia, Melbourne, Osaka, Seoul, Singapore), Europe (London), and South America (São Paulo). The OM2 instance family is now available on Amazon OpenSearch Service across 14 additional regions globally: US West (N. California), Canada (Central), Asia Pacific (Hong Kong, Hyderabad, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Paris, Spain), Middle East (Bahrain), and South America (São Paulo).
Amazon ECR now supports managed container image signing
Amazon ECR now supports managed container image signing to enhance your security posture and eliminate the operational overhead of setting up signing. Container image signing allows you to verify that images are from trusted sources. With managed signing, ECR simplifies setting up container image signing to just a few clicks in the ECR Console or a single API call.

To get started, create a signing rule with an AWS Signer signing profile that specifies parameters such as signature validity period, and which repositories ECR should sign images for. Once configured, ECR automatically signs images as they are pushed using the identity of the entity pushing the image. ECR leverages AWS Signer for signing operations, which handles key material and certificate lifecycle management including generation, secure storage, and rotation. All signing operations are logged through CloudTrail for full auditability. ECR managed signing is available in all AWS Regions where AWS Signer is available. To learn more, visit the documentation.
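A signing rule ties a Signer profile to the repositories ECR should sign for. The announcement does not give the API shape, so every field name below is a hypothetical illustration; only the concepts (a signing profile and a repository selection) come from the text above.

```python
# Hypothetical sketch of an ECR managed-signing rule. Field names are
# illustrative only; the ARN and prefix are placeholders.
signing_rule = {
    "signingProfileArn": "arn:aws:signer:us-east-1:111111111111:/signing-profiles/images",
    "repositoryFilters": [
        {"repositoryPrefix": "prod/"},  # sign images pushed to matching repos
    ],
}
```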
Amazon EKS and Amazon ECS announce fully managed MCP servers in preview
Today, Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) announced fully managed MCP servers enabling AI-powered experiences for development and operations in preview. MCP (Model Context Protocol) provides a standardized interface that enriches AI applications with real-time, contextual knowledge of EKS and ECS clusters, enabling more accurate and tailored guidance throughout the application lifecycle, from development through operations. With this launch, EKS and ECS now offer fully managed MCP servers hosted in the AWS cloud, eliminating the need for local installation and maintenance. The fully managed MCP servers provide enterprise-grade capabilities like automatic updates and patching, centralized security through AWS IAM integration, comprehensive audit logging via AWS CloudTrail, and the proven scalability, reliability, and support of AWS.

The fully managed Amazon EKS and ECS MCP servers enable developers to easily configure AI coding assistants like Kiro CLI, Cursor, or Cline for guided development workflows, optimized code generation, and context-aware debugging. Operators gain access to a knowledge base of best practices and troubleshooting guidance derived from extensive operational experience managing clusters at scale. To learn more about the Amazon EKS MCP server preview, visit the EKS MCP server documentation and launch blog post. To learn more about the Amazon ECS MCP server preview, visit the ECS MCP server documentation and launch blog post.
Announcing AWS Compute Optimizer automation rules
Today, we are introducing automation rules, a new feature in AWS Compute Optimizer that enables you to optimize Amazon Elastic Block Store (EBS) volumes at scale. With automation rules, you can streamline the process of cleaning up unattached EBS volumes and upgrading volumes to the latest-generation volume types, saving cost and improving performance across your cloud infrastructure.

Automation rules let you automatically apply optimization recommendations on a recurring schedule when they match your criteria. You can set criteria like AWS Region to target specific geographies and Resource Tags to distinguish between production and development workloads. Configure rules to run daily, weekly, or monthly, and AWS Compute Optimizer will continuously evaluate new recommendations against your criteria. A new dashboard allows you to summarize automation events over time, examine detailed step history, and estimate savings achieved. If you need to reverse an action, you can do so directly from the same dashboard. AWS Compute Optimizer automation rules are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo). To get started, navigate to the new Automation section in the AWS Compute Optimizer console, visit the AWS Compute Optimizer user guide documentation, or read the announcement blog to learn more.
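The Region and tag criteria described above amount to a simple match test against each recommendation's resource. The sketch below illustrates that matching logic; the rule's field names are assumptions for illustration, not the real rule schema from the Compute Optimizer console.

```python
# Illustrative sketch of matching an automation rule's criteria against an
# EBS resource. Field names are assumed for illustration.
rule = {
    "schedule": "WEEKLY",
    "criteria": {"region": "us-east-1", "tags": {"env": "dev"}},
}


def matches(rule, resource):
    """Return True when the resource satisfies the rule's region and tag criteria."""
    criteria = rule["criteria"]
    return (
        resource["region"] == criteria["region"]
        and all(resource.get("tags", {}).get(k) == v
                for k, v in criteria["tags"].items())
    )
```

A rule scoped this way would act on development volumes in us-east-1 while leaving production resources untouched.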
AWS Organizations now supports upgrade rollout policy for Amazon Aurora and Amazon RDS
Today, AWS Organizations announces support for upgrade rollout policy, a new capability that helps customers stagger automatic upgrades across their Amazon Aurora (MySQL-Compatible Edition and PostgreSQL-Compatible Edition) and Amazon Relational Database Service (Amazon RDS) databases, including RDS for MySQL, RDS for PostgreSQL, RDS for MariaDB, RDS for SQL Server, RDS for Oracle, and RDS for Db2. This capability eliminates the operational overhead of coordinating automatic minor version upgrades either manually or through custom tools across hundreds of resources and accounts, while giving customers peace of mind by ensuring upgrades are first tested in less critical environments before being rolled out to production.

With upgrade rollout policy, you can define upgrade sequences using simple orders (first, second, last) applied through account-level policies or resource tags. When new minor versions become eligible for automatic upgrade, the policy ensures upgrades start with development environments, allowing you to validate changes before proceeding to more critical environments. AWS Health notifications between phases and built-in validation periods help you monitor progress and ensure stability throughout the upgrade process. You can also disable automatic progression at any time if issues are detected, giving you complete control over the upgrade journey. This feature is available in all AWS commercial Regions and AWS GovCloud (US) Regions, supporting automatic minor version upgrades for Amazon Aurora and Amazon RDS database engines. You can manage upgrade policies using the AWS Management Console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS CDK. For Amazon RDS for Oracle, the upgrade rollout policy supports automatic minor version upgrades for engine versions released after January 2026. To learn more about automatic minor version upgrades, see the Amazon RDS and Aurora user guide.
For more information about upgrade rollout policy, see Managing organization policies with AWS Organizations (Upgrade rollout policy).
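The first/second/last ordering above can be pictured as grouping databases into sequential upgrade phases. This sketch assumes the order is read from a resource tag; the tag values match the orders named in the announcement, but the grouping code itself is illustrative (the real policy is attached through AWS Organizations, not computed client-side).

```python
# Sketch: grouping databases into upgrade phases from first/second/last
# rollout orders. Phase computation is illustrative only.
ORDER = {"first": 0, "second": 1, "last": 2}


def upgrade_phases(resources):
    """Group (name, order) pairs into phases upgraded in sequence."""
    phases = {0: [], 1: [], 2: []}
    for name, order in resources:
        phases[ORDER[order]].append(name)
    return [phases[i] for i in range(3)]


plan = upgrade_phases([
    ("prod-db", "last"),
    ("dev-db", "first"),
    ("staging-db", "second"),
])
```

Here development databases would upgrade and soak first, with AWS Health notifications and validation periods between phases before production is touched.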
Amazon EKS introduces Provisioned Control Plane
Today, Amazon Elastic Kubernetes Service (EKS) introduced Provisioned Control Plane, a new feature that gives you the ability to select your cluster’s control plane capacity to ensure predictable, high performance for the most demanding workloads. With Provisioned Control Plane, you can pre-provision the desired control plane capacity from a set of well-defined scaling tiers, ensuring the control plane is always ready to handle traffic spikes or unpredictable bursts. These new scaling tiers unlock significantly higher cluster performance and scalability, allowing you to run ultra-scale workloads in a single cluster.

Provisioned Control Plane ensures your cluster’s control plane is ready to support workloads that require minimal latency and high performance during anticipated high-demand events like product launches, holiday sales, or major sporting and entertainment events. It also ensures consistent control plane performance across development, staging, production, and disaster recovery environments, so the behavior you observe during testing accurately reflects what you’ll experience in production or during failover events. Finally, it enables you to run massive-scale workloads such as AI training/inference, high-performance computing, or large-scale data processing jobs that require thousands of worker nodes in a single cluster. To get started with Amazon EKS Provisioned Control Plane, use the EKS APIs, AWS Console, or infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Provisioned Control Plane, visit the EKS Provisioned Control Plane documentation and EKS pricing page.
Amazon RDS for SQL Server now supports Resource Governor
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports Resource Governor, a Microsoft SQL Server feature that enables customers to optimize database performance by managing how different workloads consume compute resources. Customers can use Resource Governor on RDS for SQL Server Enterprise Edition database instances to prevent resource-intensive queries from impacting critical workloads, implement predictable performance in multi-tenant environments, and efficiently manage resource allocation during peak usage periods.

RDS for SQL Server provides stored procedures that allow customers to implement Resource Governor configurations such as resource pools, workload groups, and classifier functions. Using these features, customers can allocate and control CPU, memory, and I/O resources for different database workloads within a single RDS for SQL Server instance. For more information about configuring and using Resource Governor, refer to the Amazon RDS for SQL Server User Guide. Resource Governor is available with SQL Server Enterprise Edition in all AWS Regions where Amazon RDS for SQL Server is available.
Announcing notebooks with a built-in AI agent in Amazon SageMaker
Amazon SageMaker introduces a new notebook experience that provides data and AI teams a high-performance, serverless programming environment for analytics and machine learning (ML) jobs. This helps customers quickly get started working with data without pre-provisioning data processing infrastructure. The new notebook gives data engineers, analysts, and data scientists one place to perform SQL queries, execute Python code, process large-scale data jobs, run ML workloads and create visualizations. A built-in AI agent accelerates development by generating code and SQL statements from natural language prompts while it guides users through their tasks. The notebook is backed by Amazon Athena for Apache Spark to deliver high-performance results, scaling from interactive SQL queries to petabyte-scale data processing. It’s available in the new one-click onboarding experience for Amazon SageMaker Unified Studio.

Data engineers, analysts, and data scientists can flexibly combine SQL, Python, and natural language within a single interactive workspace. This removes the need to switch between different tools based on your workload. For example, you can start with SQL queries to explore your data, use Python for advanced analytics or to build ML models, or use natural language prompts to generate code automatically using the built-in AI agent. To get started, sign in to the console, find SageMaker, open SageMaker Unified Studio, and go to “Notebooks” in the navigation.
You can use the SageMaker notebook feature in the following Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney).
To learn more, read the AWS News Blog or see SageMaker documentation.
Amazon Route 53 DNS service adds support for IPv6 API service endpoint
Starting today, Amazon Route 53 supports dual stack for the Route 53 DNS service API endpoint at route53.global.api.aws, enabling you to connect from Internet Protocol Version 6 (IPv6), Internet Protocol Version 4 (IPv4), or dual-stack clients. The existing Route 53 DNS service IPv4 API endpoint will remain available for backwards compatibility.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service that allows customers to register a domain, set up DNS records corresponding to their infrastructure, perform global traffic routing using Traffic Flow, and use Route 53 health checks to monitor the health and performance of their applications and resources. Due to the continued growth of the internet, IPv4 address space is being exhausted and customers are transitioning to IPv6 addresses. Now, clients can connect via IPv6 to the Route 53 DNS service API endpoint, enabling organizations to meet compliance requirements and removing the added complexity of IP address translation between IPv4 and IPv6. Support for IPv6 on the Route 53 DNS service API endpoint is available in all Commercial Regions and available at no additional cost. You can get started with this feature through the AWS CLI or AWS Management Console. To learn more about which Route 53 features are accessible via the route53.global.api.aws service endpoint, visit this page, and to learn more about the Route 53 DNS service, visit our documentation.
Amazon Athena launches auto-scaling solution for Capacity Reservations
Amazon Athena now offers an auto-scaling solution for Capacity Reservations that dynamically adjusts your reserved capacity based on workload demand. The solution uses AWS Step Functions to monitor utilization metrics and scale your Data Processing Units (DPUs) up or down according to the thresholds and limits you configure, helping you optimize costs while maintaining query performance and eliminating the need for manual capacity adjustments.

You can customize scaling behavior by setting utilization thresholds, measurement frequency, and capacity limits to match your workload needs. The solution uses Step Functions to add or remove DPUs to any active Capacity Reservation based on capacity utilization metrics in Amazon CloudWatch. Capacity automatically scales up when utilization exceeds your high threshold and scales down when it falls below your low threshold, all while adhering to your defined limits. You can further customize the solution by modifying the Amazon CloudFormation template to fit your specific requirements. The auto-scaling solution for Athena Capacity Reservations is available in AWS Regions where Capacity Reservations is supported. To get started, see Automatically adjust capacity in the Athena user guide.
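The threshold behavior described above can be sketched as a small decision function. The threshold, step, and limit values below are illustrative defaults for the sketch, not the solution's actual defaults; the real solution evaluates CloudWatch utilization metrics inside a Step Functions state machine.

```python
def scaling_decision(utilization, current_dpus, high=0.8, low=0.3,
                     step=32, min_dpus=24, max_dpus=512):
    """Return a new DPU target for a capacity reservation.

    Mirrors the behavior described above: scale up past the high threshold,
    scale down below the low threshold, always within configured limits.
    All numeric defaults are illustrative assumptions.
    """
    if utilization > high:
        return min(current_dpus + step, max_dpus)  # scale up, capped at max
    if utilization < low:
        return max(current_dpus - step, min_dpus)  # scale down, floored at min
    return current_dpus  # within band: no change
```

For example, at 90% utilization a 100-DPU reservation would step up to 132 DPUs, while at 10% utilization a 40-DPU reservation would floor at the 24-DPU minimum.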
Amazon Connect now supports multi skill agent scheduling
Amazon Connect now enables you to optimize scheduling based on agents’ multiple specialized skills. You can now maximize agent utilization across multiple dimensions such as departments, languages, and customer tiers by intelligently matching agents with multiple skills to forecasted demand. You can now also preserve multi-skilled agents for high-value interactions when needed most. For example, bilingual agents can now be strategically scheduled to cover peak periods for high-value French language queues that frequently experience staffing shortages, while handling general inquiries during off-peak times.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about multi-skill agent scheduling, visit the blog and admin guide.
AWS Glue launches Amazon DynamoDB connector with Spark DataFrame support
AWS Glue now supports a new Amazon DynamoDB connector that works natively with Apache Spark DataFrames. This enhancement allows Spark developers to work directly with Spark DataFrames and share code easily across AWS Glue, Amazon EMR, and other Spark environments.

Previously, developers working with DynamoDB data in AWS Glue were required to use the Glue-specific DynamicFrame object. With this new connector, developers can now reuse their existing Spark DataFrame code with minimal modifications. This change streamlines the process of migrating jobs to AWS Glue and simplifies data pipeline development. Additionally, the connector unlocks access to the full range of Spark DataFrame operations and the latest performance optimizations. The new connector is available in all AWS Commercial Regions where AWS Glue is available. To get started, visit the AWS Glue documentation.
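Reading a DynamoDB table then looks like any other DataFrame source. The format name and option keys below are assumptions for illustration (check the AWS Glue documentation for the exact ones), and the table name is a placeholder; the dict stands in for the `spark.read` call so the shape is visible outside a Spark session.

```python
# Hypothetical connector configuration for reading DynamoDB as a Spark
# DataFrame. Format name and option keys are assumed; "orders" is a placeholder.
connector_options = {
    "format": "dynamodb",
    "options": {
        "tableName": "orders",
        "region": "us-east-1",
    },
}
# In a Glue job this would correspond to something like:
#   df = (spark.read.format(connector_options["format"])
#              .options(**connector_options["options"]).load())
# after which any standard DataFrame operation (filter, join, groupBy) applies.
```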
Amazon CloudWatch Container Insights adds Sub-Minute GPU Metrics for Amazon EKS
Amazon CloudWatch Container Insights now supports collection of GPU metrics at sub-minute frequencies for AI and ML workloads running on Amazon EKS. Customers can configure the metric sample rate in seconds, enabling more granular monitoring of GPU resource utilization.

This enhancement enables customers to effectively monitor GPU-intensive workloads that run for less than 60 seconds, such as ML inference jobs that consume GPU resources for short durations. By increasing the sampling frequency, customers can maintain detailed visibility into short-lived GPU workloads. Sub-minute GPU metrics are sent to CloudWatch once per minute. This granular monitoring helps customers optimize their GPU resource utilization, troubleshoot performance issues, and ensure efficient operation of their containerized GPU applications. Sub-minute GPU metrics in Container Insights are available in all AWS Commercial Regions and the AWS GovCloud (US) Regions. To learn more about sub-minute GPU metrics in Container Insights, visit the NVIDIA GPU metrics page in the Amazon CloudWatch User Guide. Sub-minute GPU metrics in Container Insights are available at no additional cost. For Container Insights pricing, see the Amazon CloudWatch Pricing Page.
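A sub-minute sample rate is ultimately just a collection interval in seconds. The fragment below illustrates that idea as a Python dict standing in for an agent configuration; the key names are assumptions, not the documented Container Insights GPU schema (see the NVIDIA GPU metrics page for the real one).

```python
# Illustrative configuration fragment: sample GPU metrics every 10 seconds.
# Key names are assumed for illustration only.
agent_config = {
    "metrics": {
        "metrics_collection_interval": 10,  # seconds; sub-minute sampling
    },
}
```

Samples collected at this rate are still batched and sent to CloudWatch once per minute, as noted above.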
AWS Control Tower introduces a controls-dedicated experience
AWS Control Tower offers the easiest way to manage and govern your environment with AWS managed controls. Starting today, customers can have direct access to these AWS managed controls without requiring a full Control Tower deployment. This new experience offers over 750 managed controls that customers can deploy within minutes while maintaining their existing account structure.

AWS Control Tower v4.0 introduces direct access to Control Catalog, allowing customers to review available managed controls and deploy them into their existing AWS Organization. With this release, customers now have more flexibility and autonomy over their organizational structure, as Control Tower will no longer enforce a mandatory structure. Additionally, customers will have improved operations such as cleaner resource and permissions management and cost attribution due to the separation of S3 buckets and SNS notifications for the AWS Config and AWS CloudTrail integrations. This controls-focused experience is now available in all AWS Regions where AWS Control Tower is supported. For more information about this new capability see the AWS Control Tower User Guide or contact your AWS account team. For a full list of Regions where AWS Control Tower is available, see the AWS Region Table.
Amazon Lightsail expands blueprint selection with updated support for Nginx Blueprint
Amazon Lightsail now offers a new Nginx blueprint. This new blueprint has Instance Metadata Service Version 2 (IMDSv2) enforced by default, and supports IPv6-only instances. With just a few clicks, you can create a Lightsail virtual private server (VPS) of your preferred size that comes with Nginx preinstalled.

With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles include instances preinstalled with your preferred operating system, storage, and monthly data transfer allowance, giving you everything you need to get up and running quickly. This new blueprint is now available in all AWS Regions where Lightsail is available. For more information on blueprints supported on Lightsail, see the Lightsail documentation. For more information on pricing, or to get started with your free trial, click here.
Amazon ECR dual-stack endpoints now support AWS PrivateLink
Amazon Elastic Container Registry (ECR) announces AWS PrivateLink support for its dual-stack endpoints. This makes it easier to standardize on IPv6 and enhance your security posture.

Previously, ECR announced IPv6 support for API and Docker/OCI requests via the new dual-stack endpoints. With these dual-stack endpoints, you can make requests from either an IPv4 or an IPv6 network. With today’s launch, you can now make requests to these dual-stack endpoints using AWS PrivateLink to limit all network traffic between your Amazon Virtual Private Cloud (VPC) and ECR to the Amazon network, thereby improving your security posture. This feature is generally available in all AWS commercial and AWS GovCloud (US) regions at no additional cost. To get started, visit the ECR documentation.
AWS Glue supports AWS CloudFormation and AWS CDK for zero-ETL integrations
AWS Glue zero-ETL integrations now support AWS CloudFormation and AWS Cloud Development Kit (AWS CDK), through which you can create zero-ETL integrations using infrastructure as code. Zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines.

Using AWS Glue zero-ETL, you can ingest data from Amazon DynamoDB or enterprise SaaS sources, including Salesforce, ServiceNow, SAP, and Zendesk, into Amazon Redshift, Amazon S3, and Amazon S3 Tables. CloudFormation and CDK support for these Glue zero-ETL integrations simplifies the way you can create, update, and manage zero-ETL integrations using infrastructure as code. With CloudFormation and CDK support, data engineering teams can now consistently deploy any zero-ETL integration across multiple AWS accounts while maintaining version control of their configurations. This feature is available in all AWS Regions where AWS Glue zero-ETL is currently available. To get started with the new AWS Glue zero-ETL infrastructure as code capabilities, visit the CloudFormation documentation for AWS Glue, the CDK documentation, or the AWS Glue zero-ETL user guide.
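As a sketch of what such a template might contain, the dict below mimics a CloudFormation template body for a Salesforce-to-Redshift integration. The resource type name and property names are hypothetical placeholders (the announcement does not name them); consult the CloudFormation documentation for AWS Glue for the real schema.

```python
# Hypothetical CloudFormation template for a Glue zero-ETL integration,
# expressed as a Python dict. Type and property names are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SalesforceToRedshift": {
            "Type": "AWS::Glue::Integration",  # assumed resource type name
            "Properties": {
                "SourceArn": "arn:aws:glue:us-east-1:111111111111:connection/salesforce-conn",
                "TargetArn": "arn:aws:redshift:us-east-1:111111111111:namespace/analytics",
            },
        },
    },
}
```

Keeping the integration in a template like this is what enables the version-controlled, multi-account deployment workflow described above.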
Amazon EC2 Fleet adds new encryption attribute for instance type selection
Amazon EC2 Fleet now supports a new encryption attribute for Attribute-Based Instance Type Selection (ABIS). Customers can use the RequireEncryptionInTransit parameter to specifically launch instance types that support encryption-in-transit, in addition to specifying resource requirements like vCPU cores and memory.

The new encryption attribute addresses critical compliance needs for customers who use VPC Encryption Controls in enforced mode and require all network traffic to be encrypted in transit. By combining encryption requirements with other instance attributes in ABIS, customers can achieve instance type diversification for better capacity fulfillment while meeting their security needs. Additionally, the GetInstanceTypesFromInstanceRequirements (GITFIR) API allows you to preview which instance types you might be allocated based on your specified encryption requirements. This feature is available in all AWS commercial and AWS GovCloud (US) Regions. To get started, set the RequireEncryptionInTransit parameter to true in InstanceRequirements when calling the CreateFleet or GITFIR APIs. For more information, refer to the user guides for EC2 Fleet and GITFIR.
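Combining the new attribute with the usual ABIS ranges might look like the InstanceRequirements block below. The vCPU and memory values are illustrative; RequireEncryptionInTransit is the parameter named in the announcement, and the surrounding field names follow the existing CreateFleet InstanceRequirements shape.

```python
# Sketch of an ABIS InstanceRequirements block for CreateFleet or
# GetInstanceTypesFromInstanceRequirements. Numeric values are illustrative.
instance_requirements = {
    "VCpuCount": {"Min": 4, "Max": 16},
    "MemoryMiB": {"Min": 8192},
    # Restrict matches to instance types that support encryption-in-transit,
    # e.g. for VPCs with Encryption Controls in enforced mode.
    "RequireEncryptionInTransit": True,
}
```

Passing this block to GITFIR first lets you preview which encryption-capable instance types would satisfy the fleet before launching.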
Announcing flexible AMI distribution capabilities for EC2 Image Builder
Amazon EC2 Image Builder now allows you to distribute existing Amazon Machine Images (AMIs), retry distributions, and define custom distribution workflows. Distribution workflows are a new workflow type that complements existing build and test workflows, enabling you to define sequential distribution steps such as AMI copy operations, wait-for-action checkpoints, and AMI attribute modifications.

With enhanced distribution capabilities, you can now distribute an existing image to multiple regions and accounts without running a full Image Builder pipeline. Simply specify your AMI and distribution configuration, and Image Builder handles the copying and sharing process. Additionally, with distribution workflows, you can now customize the distribution process by defining custom steps. For example, you can distribute AMIs to a test region first, add a wait-for-action step to pause for validation, and then continue distribution to production regions after approval. This provides the same step-level visibility and control you have with build and test workflows. These capabilities are available to all customers at no additional cost, in all AWS regions including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
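The test-then-promote example above can be sketched as an ordered list of workflow steps. The step and action names below are assumptions for illustration, not the documented distribution workflow schema; only the sequence (copy to test, wait for approval, copy to production) comes from the announcement.

```python
# Illustrative distribution workflow mirroring the example above.
# Step/action names are assumed; regions are placeholders.
workflow_steps = [
    {"name": "CopyToTest", "action": "CopyAmi", "regions": ["us-west-2"]},
    {"name": "AwaitApproval", "action": "WaitForAction"},  # pause for validation
    {"name": "CopyToProd", "action": "CopyAmi",
     "regions": ["us-east-1", "eu-west-1"]},
]
```

Because the wait step sits between the two copy steps, production distribution cannot begin until the checkpoint is approved.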
Amazon SageMaker HyperPod now supports running IDEs and Notebooks to accelerate AI development
Amazon SageMaker HyperPod now supports IDEs and Notebooks, enabling AI developers to run JupyterLab or Code Editor, or connect local IDEs, to run their interactive AI workloads directly on HyperPod clusters.\n AI developers can now run IDEs and notebooks on the same persistent HyperPod EKS clusters used for training and inference. This enables developers to leverage HyperPod’s scalable GPU capacity with familiar tools like the HyperPod CLI, while sharing data across IDEs and training jobs through mounted file systems such as Amazon FSx and Amazon EFS. Administrators can maximize CPU/GPU investments through unified governance across IDEs, training, and inference workloads using HyperPod Task Governance. HyperPod Observability provides usage metrics including CPU, GPU, and memory consumption, enabling cost-efficient cluster utilization. This feature is available in all AWS Regions where Amazon SageMaker HyperPod is currently available, excluding China and GovCloud (US) Regions. To learn more, visit our documentation.
Oracle Database@AWS now supports AWS KMS integration with Oracle Transparent Data Encryption
Oracle Database@AWS is now integrated with AWS Key Management Service (KMS) to manage database encryption keys. KMS is an AWS managed service to create and control keys used to encrypt and sign data. With this integration, customers can now use KMS to encrypt Oracle Transparent Data Encryption (TDE) master keys in Oracle Database@AWS. This provides customers with a consistent mechanism to create and control keys used for encrypting data in AWS, helping them meet security and compliance requirements.\n Thousands of customers use KMS to manage keys for encrypting their data in AWS. KMS provides robust key management and control through central policies and granular access, comprehensive logging and auditing via AWS CloudTrail, and automatic key rotation for enhanced security. By using KMS to encrypt Oracle TDE master keys, customers can get the same benefits for database encryption keys for Oracle Database@AWS, and apply consistent auditing and compliance procedures for data in AWS. AWS KMS integration with TDE is available in all AWS Regions where Oracle Database@AWS is available. Other than standard AWS KMS pricing, there is no additional Oracle Database@AWS charge for the feature. To get started, see Oracle Database@AWS and documentation to use KMS.
Amazon Bedrock Data Automation now supports synchronous image processing
Amazon Bedrock Data Automation (BDA) now supports synchronous API processing for images, enabling you to receive structured insights from visual content with low latency. Synchronous processing for images complements the existing asynchronous API, giving you the flexibility to choose the right approach based on your application’s latency requirements.\n BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With synchronous image processing, you can build interactive experiences, such as social media platforms that moderate user-uploaded photos, e-commerce apps that identify products from customer images, or travel applications that recognize landmarks and provide contextual information. This eliminates polling or callback handling, simplifying your application architecture and reducing development complexity. Synchronous processing supports both Standard Output for common image analysis tasks like summarization and text extraction, and Custom Output using Blueprints for industry-specific field extraction. You now get the high-quality, structured results you expect from BDA with low-latency response times that enable more responsive user experiences. Amazon Bedrock Data Automation is available in 8 AWS Regions: Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Sydney), US West (Oregon), US East (N. Virginia), and AWS GovCloud (US-West). To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with using Bedrock Data Automation, visit the Amazon Bedrock console.
AWS Application and Network Load Balancers Now Support Post-Quantum Key Exchange for TLS
AWS Application Load Balancers (ALB) and Network Load Balancers (NLB) now support post-quantum key exchange options for the Transport Layer Security (TLS) protocol. This opt-in feature introduces new TLS security policies with hybrid post-quantum key agreement, combining classical key exchange algorithms with post-quantum key encapsulation methods, including the standardized Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) algorithm.\n Post-quantum TLS (PQ-TLS) security policies protect your data in transit against potential “Harvest Now, Decrypt Later” (HNDL) attacks, where adversaries collect encrypted data today with the intention to decrypt it once quantum computing capabilities mature. This quantum-resistant encryption ensures long-term security for your applications and data transmissions, future-proofing your infrastructure against emerging quantum computing threats. This feature is available for ALB and NLB in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions at no additional cost. To use this capability, you must explicitly update your existing ALB HTTPS listeners or NLB TLS listeners to use a PQ-TLS security policy, or select a PQ-TLS policy when creating new listeners through the AWS Management Console, CLI, API, or SDK. You can monitor the use of classical or quantum-safe key exchange using ALB connection logs or NLB access logs. For more information, please visit the ALB User Guide, the NLB User Guide, and the AWS Post-Quantum Cryptography documentation.
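Opting a listener into a PQ-TLS policy is a one-field change. A sketch assuming boto3's elbv2 client; the policy name below is a placeholder, since the actual PQ-TLS policy names are listed in the ALB/NLB documentation:

```python
# Sketch: switching an existing HTTPS listener to a PQ-TLS security policy.
# modify_listener is a real elbv2 API; the policy name is a placeholder to
# replace with one of the documented PQ-TLS policy names.

PQ_TLS_POLICY = "ELBSecurityPolicy-PQ-TLS-EXAMPLE"  # hypothetical name

def build_modify_listener_params(listener_arn, policy=PQ_TLS_POLICY):
    return {"ListenerArn": listener_arn, "SslPolicy": policy}

params = build_modify_listener_params(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def"
)

# With boto3 (call sketch):
# elbv2 = boto3.client("elbv2")
# elbv2.modify_listener(**params)
```

Because the feature is opt-in, listeners keep their classical policy until you apply a change like this one.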
Announcing Amazon ECS Express Mode
Today, AWS announces Amazon Elastic Container Service (Amazon ECS) Express Mode, a new feature that empowers developers to rapidly launch containerized applications, including web applications and APIs. ECS Express Mode makes it easy to orchestrate and manage the cloud architecture for your application, while maintaining full control over your infrastructure resources.\n Amazon ECS Express Mode streamlines the deployment and management of containerized applications on AWS, allowing developers to focus on delivering business value through their containerized applications. Every Express Mode service automatically receives an AWS-provided domain name, making your application immediately accessible without additional configuration. Applications using ECS Express Mode incorporate AWS operational best practices, serve either public or private HTTPS requests, and scale in response to traffic patterns. Traffic is distributed through Application Load Balancers (ALBs), and ECS Express Mode automatically consolidates up to 25 Express Mode services behind a single ALB when appropriate. ECS Express Mode uses intelligent rule-based routing to maintain isolation between services while efficiently utilizing the ALB resource. All resources provisioned by ECS Express Mode remain fully accessible in your account, ensuring you never sacrifice control or flexibility. As your application requirements evolve, you can directly access and modify any infrastructure resource, leveraging the complete feature set of Amazon ECS and related services without disruption to your running applications. To get started, just provide your container image, and ECS Express Mode handles the rest by deploying your application in Amazon ECS and auto-generating a URL. Amazon ECS Express Mode is available now in all AWS Regions at no additional charge. You pay only for the AWS resources created to run your application. To deploy a new ECS Express Mode service, use the Amazon ECS Console, SDK, CLI, CloudFormation, CDK, or Terraform.
For more information, see the AWS News blog or the documentation.
Amazon API Gateway REST APIs now support private integration with Application Load Balancer
Amazon API Gateway REST APIs now support direct private integration with Application Load Balancer (ALB), enabling inter-VPC connectivity to internal ALBs. This enhancement extends API Gateway’s existing VPC connectivity, providing you with more flexible and efficient architecture choices for your REST API implementations.\n This direct ALB integration delivers multiple advantages: reduced latency by eliminating the additional network hop previously required through Network Load Balancer, lower infrastructure costs through simplified architecture, and enhanced Layer 7 capabilities including HTTP/HTTPS health checks, advanced request-based routing, and native container service integration. You can still use API Gateway’s integration with Network Load Balancers for Layer 4 connectivity. Amazon API Gateway private integration with ALB is available in all AWS GovCloud (US) Regions and the following AWS commercial Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo). For more information, visit the Amazon API Gateway documentation and blog post.
Amazon Lex extends wait & continue feature to 10 new languages
Amazon Lex now supports wait & continue functionality in 10 new languages, enabling more natural conversational experiences in Chinese, Japanese, Korean, Cantonese, Spanish, French, Italian, Portuguese, Catalan, and German. This feature allows deterministic voice and chat bots to pause while customers gather additional information, then seamlessly resume when ready. For example, when asked for payment details, customers can say “hold on a second” to retrieve their credit card, and the bot will wait before continuing.\n This feature is available in all AWS Regions where Amazon Lex operates. To learn more, visit the Amazon Lex documentation or explore the Amazon Connect website to learn how Amazon Connect and Amazon Lex deliver seamless end-customer self-service experiences.
AWS Security Token Service Now Supports Internet Protocol version 6 (IPv6)
AWS Security Token Service (STS) now supports Internet Protocol version 6 (IPv6) addresses via new dual-stack endpoints. You can connect to STS over the public internet using IPv6, IPv4, or dual-stack (both IPv4 and IPv6) clients. Dual-stack support is also available when you access STS endpoints privately from your Amazon Virtual Private Cloud (VPC) using AWS PrivateLink, allowing you to invoke STS APIs without traversing the public internet.\n Support for dual-stack STS endpoints is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and China Regions. To get started, configure your STS client to use the new dual-stack endpoints using the configuration instructions in the IAM user guide.
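The dual-stack endpoint can be targeted explicitly. A sketch, assuming the standard service.region.api.aws dual-stack hostname pattern (confirm the exact hostname for your Region in the IAM user guide):

```python
# Sketch: constructing and targeting the STS dual-stack endpoint. The
# service.region.api.aws hostname pattern is AWS's dual-stack endpoint
# convention; verify the exact hostname for your Region before relying on it.

def sts_dualstack_endpoint(region):
    return f"https://sts.{region}.api.aws"

endpoint = sts_dualstack_endpoint("us-east-1")

# With boto3 (call sketch):
# sts = boto3.client("sts", region_name="us-east-1", endpoint_url=endpoint)
# identity = sts.get_caller_identity()
#
# Alternatively, botocore's Config(use_dualstack_endpoint=True) selects the
# dual-stack endpoint without hardcoding the URL.
```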
Announcing AWS Lambda Kafka event source mapping integration in Amazon MSK Console
AWS announces Lambda’s Kafka event source mapping (ESM) integration in the Amazon MSK Console, streamlining the process of connecting MSK topics to Lambda functions. This capability allows you to simply provide your topic and target function in the MSK Console while the integration handles ESM configuration automatically, enabling you to trigger Lambda functions from MSK topics without switching consoles.\n Customers use MSK as an event source for Lambda functions to build responsive event-driven Kafka applications. Previously, configuring MSK as an event source required navigating between MSK and Lambda consoles to provide parameters like cluster details, authentication method, and network configuration. The new integrated experience brings Lambda ESM configuration directly into the MSK Console with a simplified interface requiring only target function and topic name as mandatory fields. The integration handles ESM creation with optimized defaults for authentication and event polling configurations, and can automatically generate the required Lambda execution role permissions for MSK cluster access. To optimize latency and throughput, and to remove the need for networking setup, the integration uses Provisioned Mode for ESM as the recommended default. These improvements streamline MSK integration with Lambda and reduce configuration errors, enabling you to quickly get started with your MSK and Lambda applications. This feature is generally available in all AWS Commercial Regions where both Amazon MSK and AWS Lambda are available, except Asia Pacific (Thailand), Asia Pacific (Malaysia), Israel (Tel Aviv), Asia Pacific (Taipei), and Canada West (Calgary). You can configure Lambda’s Kafka event source mapping from the MSK Console by navigating to your MSK cluster and providing the topic, Lambda function, and optional fields under the Lambda integration tab. Standard Lambda pricing and MSK pricing apply.
To learn more, read the Lambda developer guide and the MSK developer guide.
AWS Lambda announces new capabilities to optimize costs by up to 90% for Provisioned mode for Kafka ESM
AWS Lambda announces new capabilities for Provisioned mode for Kafka event source mappings (ESMs) that allow you to group your Kafka ESMs and support higher density of event pollers, enabling you to optimize costs by up to 90% for your Kafka ESMs. With these cost optimization capabilities, you can now use Provisioned mode for all your Kafka workloads, including those with lower throughput requirements, while benefiting from features like throughput controls, schema validation, filtering of Avro/Protobuf events, low-latency invocations, and enhanced error handling.\n Customers use Provisioned mode for Kafka ESM to fine-tune the throughput of the ESM by provisioning and auto-scaling polling resources called event pollers. Charges are calculated using a billing unit called Event Poller Unit (EPU). Each EPU supports up to 20 MB/s of throughput capacity and previously defaulted to 4 event pollers per EPU. With this launch, each EPU automatically supports a default of 10 event pollers for low-throughput use cases, improving utilization of your EPU capacity. Additionally, you can now group multiple Kafka ESMs within the same Amazon VPC to share EPU capacity by configuring the new PollerGroupName parameter. With these enhancements, you can reduce your EPU costs by up to 90% for your low-throughput workloads. These optimizations enable you to maintain the performance benefits of Provisioned mode while significantly reducing costs for applications with varying throughput requirements. This feature is available in all AWS Commercial Regions where AWS Lambda’s Provisioned mode for Kafka ESM is available. Starting today, existing Provisioned mode for Kafka ESMs will automatically benefit from improved packing of low-throughput event pollers. You can implement ESM grouping through the Lambda ESM API, AWS Console, CLI, SDK, CloudFormation, and SAM by configuring the PollerGroupName parameter along with minimum and maximum event poller settings.
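The packing improvement can be sketched with simple arithmetic, using the 4-versus-10 pollers-per-EPU figures from this announcement; the PollerGroupName placement shown in the comment is an assumption to verify against the ESM API reference:

```python
import math

# Sketch: how the new 10-pollers-per-EPU default shrinks the EPU count for
# low-throughput ESMs. Poller-density figures come from the announcement
# (old default: 4 event pollers per EPU; new default: 10).

def epus_needed(total_pollers, pollers_per_epu):
    return math.ceil(total_pollers / pollers_per_epu)

# 20 low-throughput ESMs at 1 poller each, grouped to share EPU capacity:
before = epus_needed(20, 4)   # old default -> 5 EPUs
after = epus_needed(20, 10)   # new default -> 2 EPUs

# Grouping sketch (PollerGroupName is the new parameter from this launch;
# its exact placement in the request is an assumption, check the ESM docs):
# lambda_client.create_event_source_mapping(
#     EventSourceArn=msk_cluster_arn,
#     FunctionName="my-function",
#     Topics=["orders"],
#     ProvisionedPollerConfig={"MinimumPollers": 1, "MaximumPollers": 2,
#                              "PollerGroupName": "shared-low-throughput"},
# )
```

In this toy case the denser packing alone cuts EPU count from 5 to 2; combining it with grouping across many small ESMs is what drives the up-to-90% figure.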
For more information about these new capabilities and pricing details, visit the Lambda ESM documentation and AWS Lambda pricing.
Amazon WorkSpaces Applications now supports IPv6
Amazon WorkSpaces Applications now supports IPv6 for WorkSpaces Applications domains and external endpoints, allowing end users to connect to WorkSpaces Applications over IPv6 from IPv6-compatible devices (except for SAML authentication). This helps you meet IPv6 compliance requirements and eliminates the need for expensive networking equipment to handle address translation between IPv4 and IPv6.\n The Internet’s growth is consuming IPv4 addresses quickly. By supporting IPv6, WorkSpaces Applications helps customers streamline their network architecture. IPv6 offers a much larger address space and removes the need to manage overlapping address spaces in their VPCs. Customers can now base their applications on IPv6, ensuring their infrastructure is future-ready and compatible with existing IPv4 systems via a fallback mechanism.
This feature is available at no additional cost in 16 AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Paris, Frankfurt, London, Ireland), Asia Pacific (Tokyo, Mumbai, Sydney, Seoul, Singapore), South America (São Paulo), and AWS GovCloud (US-West, US-East). WorkSpaces Applications offers pay-as-you-go pricing.
To get started with WorkSpaces Applications, see Getting Started with Amazon WorkSpaces Applications. To enable this feature for your users, you must use the latest WorkSpaces Applications client for Windows or macOS, or connect directly through web access. To learn more about the feature, please refer to the service documentation.
Amazon CloudWatch Application Signals adds GitHub Action and MCP server improvements
AWS announces the general availability of a new GitHub Action and improvements to the CloudWatch Application Signals MCP server that bring application observability into developer tools, making troubleshooting issues faster and more convenient. Previously, developers had to leave GitHub to triage production issues, look up trace data, and ensure observability coverage, often switching between consoles, dashboards, and source code. Starting today, the Application Observability for AWS GitHub Action helps you catch SLO breaches or critical service errors in GitHub workflows. In addition, you can now use the CloudWatch Application Signals MCP server in AI coding agents such as Kiro to identify the exact file, function, and line of code responsible for latency, errors, or SLO violations. Furthermore, you can get instrumentation guidance that ensures comprehensive observability coverage.\n With this new GitHub Action, developers can mention @awsapm in GitHub Issues with prompts like “Why is my checkout service experiencing high latency?” and receive intelligent, observability-based responses without switching between consoles, saving time and effort. In addition, with improvements to the CloudWatch Application Signals MCP server, developers can now ask questions like “Which line of code caused the latency spike in my service?”. Furthermore, when instrumentation is missing, the MCP server can modify infrastructure-as-code (e.g., CDK, Terraform) to help teams set up OTel-based application performance monitoring for ECS, EKS, Lambda, and EC2 without requiring coding effort. Together, these features bring observability into development workflows, reduce context switching, and power intelligent, agent-assisted debugging from code to production. To get started, visit the Application Observability for AWS GitHub Action documentation and the CloudWatch Application Signals MCP server documentation.
AWS Network Firewall now supports flexible cost allocation via Transit Gateway
AWS Network Firewall now supports flexible cost allocation through AWS Transit Gateway native attachments, enabling you to automatically distribute data processing costs across different AWS accounts. Customers can create metering policies to apply data processing charges based on their organization’s chargeback requirements instead of consolidating all expenses in the firewall owner account.\n This capability helps security and network teams better manage centralized firewall costs by distributing charges to application teams based on actual usage. Organizations can now maintain centralized security controls while automatically allocating inspection costs to the appropriate business units or application owners, eliminating the need for custom cost management solutions. Flexible cost allocation is available in all AWS Commercial Regions and Amazon China Regions where both AWS Network Firewall and Transit Gateway attachments are supported. There are no additional charges for using this attachment or flexible cost allocation beyond standard pricing of AWS Network Firewall and AWS Transit Gateway. To learn more, visit the AWS Network Firewall service documentation.
Amazon CloudWatch Introduces In-Console Agent Management on EC2
Amazon CloudWatch now offers an in-console experience for automated installation and configuration of the Amazon CloudWatch agent on EC2 instances. The Amazon CloudWatch agent is used by developers and SREs to collect infrastructure and application metrics, logs, and traces from EC2 and send them to CloudWatch and AWS X-Ray. This new experience provides visibility into agent status across your EC2 fleet, performs automatic detection of supported workloads, and leverages CloudWatch observability solutions to recommend monitoring configurations based on detected workloads.\n Customers can now deploy the CloudWatch agent through one-click installation to individual instances or by creating tag-based policies for automated fleet-wide management. The automated policies ensure newly launched instances, including those created through auto-scaling, are automatically configured with the appropriate monitoring settings. By simplifying agent deployment and providing intelligent configuration recommendations, customers can ensure consistent monitoring across their environment while reducing setup time from hours to minutes. The in-console agent management experience is available in the following AWS Regions: Europe (Stockholm), Asia Pacific (Mumbai), Europe (Paris), US East (Ohio), Europe (Ireland), Europe (Frankfurt), South America (São Paulo), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Tokyo), US West (Oregon), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central). To get started with the Amazon CloudWatch agent in the CloudWatch console, see Installing the CloudWatch agent in the Amazon CloudWatch User Guide.
AWS Security Incident Response now offers metered pricing with free tier
Today, AWS Security Incident Response announces a new metered pricing model that charges customers based on the number of security findings ingested, making automated security incident response capabilities and expert guidance from the AWS Customer Incident Response Team (CIRT) more flexible and scalable for organizations of all sizes.\n The new pricing model introduces a free tier covering the first 10,000 findings per month, allowing security teams to explore and validate the service’s value at no cost. Customers pay $0.000676 per finding after the free tier, with tiered discounts that reduce rates as volume increases. This consumption-based approach enables customers to scale their security incident response capabilities as their needs evolve, without upfront commitments or minimum fees. Customers can monitor the number of monthly findings through Amazon CloudWatch at no additional cost, making it easy to track usage against the free tier and any applicable charges. The new pricing model automatically applies to all AWS Regions where Security Incident Response is available starting November 21, 2025, requiring no action from customers. To learn more, visit the Security Incident Response pricing page.
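The metered model is easy to estimate. A sketch using the free tier and post-free-tier rate stated above; volume-tier discounts beyond the base rate are not modeled:

```python
# Sketch: estimating the monthly Security Incident Response charge from
# findings ingested. Uses the free tier (first 10,000 findings per month) and
# the post-free-tier rate of $0.000676 per finding from this announcement;
# the tiered volume discounts are not modeled, so this is an upper bound.

FREE_TIER_FINDINGS = 10_000
RATE_PER_FINDING = 0.000676  # USD, base rate after the free tier

def estimated_monthly_cost(findings):
    billable = max(0, findings - FREE_TIER_FINDINGS)
    return billable * RATE_PER_FINDING

small_team = estimated_monthly_cost(8_000)    # entirely within the free tier
larger_org = estimated_monthly_cost(110_000)  # 100,000 billable findings
```

A team ingesting 8,000 findings a month pays nothing; 110,000 findings works out to roughly $67.60 before any volume discount.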
Amazon ECS and Amazon EKS now offer enhanced AI-powered troubleshooting in the Console
Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS) now offer enhanced AI-powered troubleshooting experiences in the AWS Management Console through Amazon Q Developer. The new AI-powered experiences appear contextually alongside error or status messages in the console, helping customers root cause issues and view mitigation suggestions with a single click.\n In the ECS Console, customers can use the new “Inspect with Amazon Q” button to troubleshoot issues such as failed tasks, container health check failures, or deployment rollbacks. Simply click the status reason on task details, task definition details, or deployment details page, and click “Inspect with Amazon Q” from the popover to start troubleshooting with context from the issue provided to the agent for you. Once clicked, Amazon Q automatically uses appropriate AI tools to analyze the issue, gather the relevant logs and metrics, help you understand the root cause, and recommend mitigation actions. The Amazon EKS console integrates Amazon Q throughout the observability dashboard, enabling you to inspect and troubleshoot cluster, control plane, and node health issues with contextual AI assistance. Simply click “Inspect with Amazon Q” directly from tables that outline issues, or click on an issue to view details and then select “Inspect with Amazon Q” to begin your investigation. The Q-powered experience provides deeper understanding of cluster-level insights, such as upgrade insights, helping you proactively identify and mitigate potential issues. Amazon Q also streamlines workload troubleshooting by helping you investigate Kubernetes events on pods that indicate issues, accelerating root cause identification and resolution. Amazon Q integration in the Amazon ECS and Amazon EKS consoles is now available in all AWS commercial regions. To learn more, visit the ECS developer guide and EKS user guide.
Amazon Simple Email Service is now available in two new AWS Regions
Amazon Simple Email Service (Amazon SES) is now available in the Asia Pacific (Malaysia) and Canada West (Calgary) Regions. Customers can now use these new Regions to leverage Amazon SES to send emails and, if needed, to help manage data sovereignty requirements.\n Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page. With this launch, Amazon SES is available in 29 AWS Regions globally: US East (N. Virginia, Ohio), US West (N. California, Oregon), AWS GovCloud (US-West, US-East), Asia Pacific (Osaka, Mumbai, Hyderabad, Sydney, Singapore, Seoul, Tokyo, Jakarta, Malaysia), Canada (Central, Calgary), Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (São Paulo), and Africa (Cape Town). For a complete list of all of the regional endpoints for Amazon SES, see AWS Service Endpoints in the AWS General Reference.
AWS Backup now supports Amazon FSx Intelligent-Tiering
AWS Backup now supports Amazon FSx Intelligent-Tiering, a storage class that delivers fully elastic file storage that automatically scales up and down with your workloads.\n The FSx Intelligent-Tiering storage class is available for Amazon FSx for Lustre and Amazon FSx for OpenZFS file systems and combines performance and pay-for-what-you-use elasticity with automated cost optimization in a single solution. With this integration, you can now protect OpenZFS and Lustre file systems using FSx Intelligent-Tiering through AWS Backup’s centralized backup management capabilities. Customers with existing backup plans for Amazon FSx do not need to make any changes, as all scheduled backups will continue to work as expected.
AWS Backup support is available in all AWS Regions where FSx Intelligent-Tiering is available. For a full list of supported Regions, see the region availability documentation for Amazon FSx for OpenZFS and Amazon FSx for Lustre.
To learn more about AWS Backup for Amazon FSx, visit the AWS Backup product page, technical documentation, and pricing page. For more information on the AWS Backup features available across AWS Regions, see AWS Backup documentation. To get started, visit the AWS Backup console.
Amazon CloudWatch Container Insights now supports Neuron UltraServers on Amazon EKS
Amazon CloudWatch Container Insights now supports Neuron UltraServers on Amazon EKS, providing enhanced observability for customers running large-scale, high-performance machine learning workloads on multi-instance nodes. This new capability enables data scientists and ML engineers to efficiently monitor and troubleshoot their containerized ML applications, offering aggregated metrics and simplified management across Neuron UltraServer groups.\n Neuron UltraServers combine multiple EC2 instances into a single logical server unit, optimized for machine learning workloads using AWS Trainium and Inferentia accelerators. Container Insights, a monitoring and diagnostics feature in Amazon CloudWatch, automatically collects metrics from containerized applications. With this launch, Container Insights introduces a new filter specifically for UltraServers in EKS environments. You can now select an UltraServer ID to view new aggregate metrics across all instances within that server, replacing the need to monitor individual instances separately. In addition to per-instance metrics, you can now view consolidated performance data for the entire UltraServer group, streamlining the monitoring of ML workloads running on AWS Neuron.
Amazon CloudWatch Container Insights is available in all commercial AWS Regions and the AWS GovCloud (US) Regions.
To get started, see AWS Neuron metrics for AWS Trainium and AWS Inferentia in the Amazon CloudWatch User Guide.
Amazon Aurora DSQL now provides an integrated query editor in the AWS Management Console
Amazon Aurora DSQL now provides an integrated query editor for browser-based SQL access. With this launch, customers can securely connect to their Aurora DSQL clusters and run SQL queries directly from the AWS Management Console, without installing or configuring external clients. This capability helps developers, analysts, and data engineers start querying within seconds of cluster creation, accelerating time-to-value and simplifying database interactions.\n The Aurora DSQL query editor provides an intuitive workspace with built-in syntax highlighting, auto-completion, and intelligent code assistance. You can quickly explore schema objects, develop and execute SQL queries, and view results, all within a single interface. This unified experience streamlines data exploration and analysis, making it easier for users to get started with Aurora DSQL. Aurora DSQL Console Query Editor is available in all Regions where Aurora DSQL is available. Try it out today in the AWS Management Console, and visit the Aurora DSQL Query Editor documentation to learn more.
AWS Application Load Balancer now supports Health Check Logs
AWS Application Load Balancer (ALB) now supports Health Check Logs, which allow you to send detailed target health check log data directly to your designated Amazon S3 bucket. This optional feature captures comprehensive target health check status, timestamp, target identification data, and failure reasons.\n Health Check Logs provide complete visibility into target health status with precise failure diagnostics, enabling faster troubleshooting without contacting AWS Support. You can analyze target health patterns over time, determine exactly why instances were marked unhealthy, and significantly reduce mean time to resolution for target health investigations. Logs are automatically delivered to your S3 bucket every 5 minutes with no additional charges beyond standard S3 storage costs. This feature is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions where Application Load Balancer is offered. You can enable Health Check Logs through the AWS Management Console, AWS CLI, or programmatically using the AWS SDK. Learn more about Health Check Logs for ALBs in the AWS documentation.
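Enabling the logs follows the usual load balancer attribute pattern. A sketch assuming boto3's modify_load_balancer_attributes; the attribute keys below are hypothetical placeholders, so check the ALB documentation for the real ones:

```python
# Sketch: enabling Health Check Logs on an ALB via load balancer attributes.
# modify_load_balancer_attributes is a real elbv2 API, but the attribute keys
# below are hypothetical placeholders for illustration only.

def build_health_check_log_attrs(alb_arn, bucket):
    return {
        "LoadBalancerArn": alb_arn,
        "Attributes": [
            # Hypothetical attribute names -- see the ALB docs for actual keys:
            {"Key": "health_check_logs.s3.enabled", "Value": "true"},
            {"Key": "health_check_logs.s3.bucket", "Value": bucket},
        ],
    }

params = build_health_check_log_attrs(
    "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc",
    "my-health-check-log-bucket",
)

# With boto3 (call sketch):
# elbv2 = boto3.client("elbv2")
# elbv2.modify_load_balancer_attributes(**params)
```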
Amazon ECS Managed Instances now available in AWS GovCloud (US) Regions
Amazon Elastic Container Service (Amazon ECS) Managed Instances is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. By offloading infrastructure operations to AWS, you get the application performance you want and the simplicity you need while reducing your total cost of ownership.\n Managed Instances dynamically scales EC2 instances to match your workload requirements and continuously optimizes task placement to reduce infrastructure costs. It also enhances your security posture through regular security patching initiated every 14 days. You can simply define your task requirements such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances Capacity Provider configuration, including GPU-accelerated, network-optimized, and burstable performance instances, to run your workloads on the instance families you prefer. To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
Amazon RDS for Oracle now offers SE2 License Included R7i and M7i instances in Asia Pacific (Taipei)
Amazon Relational Database Service (Amazon RDS) for Oracle now offers Oracle Database Standard Edition 2 (SE2) License Included R7i and M7i instances in the Asia Pacific (Taipei) region.

With Amazon RDS for Oracle SE2 License Included instances, you do not need to purchase Oracle Database licenses. You simply launch Amazon RDS for Oracle instances through the AWS Management Console, AWS CLI, or AWS SDKs, and there are no separate license or support charges. Review the AWS blog Rethink Oracle Standard Edition Two on Amazon RDS for Oracle to explore how you can lower cost and simplify operations by using Amazon RDS for Oracle SE2 License Included instances for your Oracle databases. To learn more about pricing and regional availability, see Amazon RDS for Oracle pricing.
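A minimal CLI sketch of launching an SE2 License Included instance on an M7i class follows. The instance identifier, credentials, and storage values are placeholders, and the Taipei region code is assumed here to be `ap-east-2`; verify it against the Amazon RDS regional availability table.

```shell
# Sketch: create an RDS for Oracle SE2 License Included instance.
# --license-model license-included means no separate Oracle license
# is required; db.m7i.large is one of the newly supported classes.
aws rds create-db-instance \
  --region ap-east-2 \
  --db-instance-identifier my-oracle-se2 \
  --db-instance-class db.m7i.large \
  --engine oracle-se2 \
  --license-model license-included \
  --master-username admin \
  --master-user-password 'REPLACE_ME' \
  --allocated-storage 100
```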
AWS Transfer Family web apps now support VPC endpoints
AWS Transfer Family web apps now support Virtual Private Cloud (VPC) endpoints, enabling private access to your web app at no additional charge. This allows your users to securely access and manage files in Amazon S3 through a web browser while keeping all traffic within your VPC.

Transfer Family web apps provide a simple and secure web interface for accessing your data in Amazon S3. With this launch, your workforce users can connect directly through your VPC, AWS Direct Connect, or VPN connections. This lets you support internal use cases requiring strict security controls, such as regulated document workflows and sensitive data sharing, while leveraging the security controls and network configurations already defined in your VPC. You can manage access using security groups based on source IP addresses, implement subnet-level filtering through NACLs, and ensure all file transfers remain within your private network boundary, maintaining full visibility and control over all network traffic. VPC endpoints for web apps are available in select AWS Regions at no additional charge. To get started, visit the AWS Transfer Family console, or use the AWS CLI/SDK. To learn more, visit the Transfer Family User Guide.
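For a sense of the network setup, a private web app would typically sit behind an interface VPC endpoint. The sketch below uses the standard `ec2 create-vpc-endpoint` command; the `--service-name` value and all resource IDs are assumptions for illustration only, so look up the actual service name in the Transfer Family User Guide.

```shell
# Sketch: create an interface VPC endpoint for a Transfer Family web
# app so browser traffic stays inside the VPC. The service name below
# is hypothetical; subnet and security group IDs are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0abc1234 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.transfer.webapp \
  --subnet-ids subnet-0abc1234 subnet-0def5678 \
  --security-group-ids sg-0abc1234
```

The attached security group is where you would restrict access by source IP, as the announcement describes.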
Amazon Quick Sight dashboard customization now includes tables and pivot tables
Amazon Quick Sight has expanded customization capabilities to include tables and pivot tables in dashboards. This update enables readers to personalize their data views by sorting, reordering, hiding/showing, and freezing columns, all without requiring updates from dashboard authors.

These capabilities are especially valuable for teams that need to tailor dashboard views for different analytical needs and collaborate across departments. For example, sales managers can quickly sort by revenue to identify top performers, while finance teams can freeze account columns to maintain context in large datasets. These new customization features are now available in Amazon Quick Sight Enterprise Edition across all supported Amazon Quick Sight regions. Learn how to get started with these new customization features in our blog post.
AWS Blogs
AWS Japan Blog (Japanese)
- You, an infrastructure engineer, too! Let’s use Kiro for shell script development
- Recommended pair programming using Kiro
- What’s New — AWS Systems Manager for SAP Configuration Management: Automated SAP HANA Best Practice Verification
- Check before you join — Well-Architected and Cloud Optimization Session Guide at AWS re:Invent 2025
- SAP HANA DB HA Configuration Patching Automation Using SSM and NZdT
- Introducing Amazon MWAA Serverless
- How to customize how long Amazon Connect call recordings are stored
- The complete guide to AWS re:Invent 2025 Cloud Financial Management sessions: points to keep in mind before attending
- How to avoid getting lost: Introducing Kiro’s checkpoint features
- Security and governance for using Kiro in an organization
AWS News Blog
- New one-click onboarding and notebooks with a built-in AI agent in Amazon SageMaker Unified Studio
- Build production-ready applications without infrastructure complexity using Amazon ECS Express Mode
- Introducing VPC encryption controls: Enforce encryption in transit within and across VPCs in a Region
- Introducing attribute-based access control for Amazon S3 general purpose buckets
AWS Cloud Financial Management
AWS Big Data Blog
AWS Compute Blog
- Enhancing API security with Amazon API Gateway TLS security policies
- Improving throughput of serverless streaming workloads for Kafka
- Build scalable REST APIs using Amazon API Gateway private integration with Application Load Balancer
- Serverless strategies for streaming LLM responses
AWS Contact Center
Containers
- Introducing the fully managed Amazon EKS MCP Server (preview)
- Accelerate container troubleshooting with the fully managed Amazon ECS MCP server (preview)
- Streamline container image signatures with Amazon ECR managed signing
- Guide to Amazon EKS and Kubernetes sessions at AWS re:Invent 2025
AWS Database Blog
AWS Developer Tools Blog
AWS DevOps & Developer Productivity Blog
- Introducing AWS CloudFormation Stack Refactoring Console Experience: Reorganize Your Infrastructure Without Disruption
- Take fine-grained control of your AWS CloudFormation StackSets Deployment with StackSet Dependencies
AWS for Industries
- Managing Sustainability Data for Digital Product Passports with Agentic AI
- Accelerate breast cancer treatment planning with agentic AI
Artificial Intelligence
- Streamline AI operations with the Multi-Provider Generative AI Gateway reference architecture
- Deploy geospatial agents with Foursquare Spatial H3 Hub and Amazon SageMaker AI
- How Wipro PARI accelerates PLC code generation using Amazon Bedrock
Networking & Content Delivery
- Introducing Flexible Cost Allocation for AWS Transit Gateway
- AWS Site-to-Site VPN and eero make remote connectivity for distributed sites simpler
AWS Quantum Technologies Blog
AWS Security Blog
- Practical steps to minimize key exposure using AWS Security Services
- Accelerate investigations with AWS Security Incident Response AI-powered capabilities
- The Agentic AI Security Scoping Matrix: A framework for securing autonomous AI systems