11/24/2025, 12:00:00 AM ~ 11/25/2025, 12:00:00 AM (UTC)
Recent Announcements
OpenSearch Service Enhances Log Analytics with New PPL Experience
Today, AWS announces enhanced log analytics capabilities in Amazon OpenSearch Service, making Piped Processing Language (PPL) and natural language the default experience in OpenSearch UI’s Observability workspace. This update combines proven pipeline syntax with simplified workflows to deliver an intuitive observability experience, helping customers analyze growing data volumes while controlling costs. The new experience includes 35+ new commands for deep analysis, faceted exploration, and natural language querying to help customers gain deeper insights across infrastructure, security, and business metrics.

With this enhancement, customers can streamline their log analytics workflows using familiar pipeline syntax while leveraging advanced analytics capabilities. The solution includes enterprise-grade query capabilities, supporting advanced event correlation using natural language that helps teams uncover meaningful patterns faster. Users can seamlessly move from query to visualization within a single interface, reducing mean time to detect and resolve issues. Admins can quickly stand up an end-to-end OpenTelemetry solution using OpenSearch’s Get Started workflow in the AWS console. The unified workflow includes out-of-the-box OpenSearch Ingestion pipelines for OpenTelemetry data, making it easier for teams to get started quickly. Amazon OpenSearch UI is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Zurich), South America (São Paulo), and Canada (Central).
To learn more about the new OpenSearch log analytics experience, visit the OpenSearch Service observability documentation and start using these enhanced capabilities today in OpenSearch UI.
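To illustrate the pipeline style the announcement refers to, here is a minimal sketch of how PPL chains commands with `|` (the index and field names are hypothetical, not taken from the announcement):

```python
# A minimal sketch of PPL's pipe syntax. Each stage is joined with "|",
# mirroring how PPL chains commands such as `where`, `stats`, and `sort`.
# Index and field names here are hypothetical examples.

def build_ppl_query(index, *stages):
    """Compose a PPL query string from a source index and pipeline stages."""
    return " | ".join([f"source={index}", *stages])

query = build_ppl_query(
    "app_logs",                          # hypothetical index name
    "where status_code >= 500",          # keep only server errors
    "stats count() as errors by host",   # aggregate per host
    "sort - errors",                     # busiest hosts first
)
print(query)
```

The same pipeline shape underlies the natural-language experience: a prompt such as "show hosts with the most 5xx errors" can be translated into a query like the one above.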
Amazon CloudFront announces support for mutual TLS authentication
Amazon CloudFront announces support for mutual TLS authentication (mTLS), a security protocol that requires both the server and client to authenticate each other using X.509 certificates, enabling customers to validate client identities at CloudFront’s edge locations. Customers can now ensure that only clients presenting trusted certificates can access their distributions, helping protect against unauthorized access and security threats.

Previously, customers had to spend ongoing effort implementing and maintaining their own client access management solutions, leading to undifferentiated heavy lifting. Now, with support for mutual TLS, customers can easily validate client identities at the AWS edge before connections are established with their application servers or APIs. Example use cases include secure B2B API integrations for enterprises and client authentication for IoT. For B2B API security, enterprises can authenticate API requests from trusted third parties and partners using mutual TLS. For IoT use cases, enterprises can validate that devices are authorized to receive proprietary content such as firmware updates. Customers can leverage their existing third-party certificate authorities or AWS Private Certificate Authority to sign the X.509 certificates. With mutual TLS, customers get the performance and scale benefits of CloudFront for workloads that require client authentication. Mutual TLS authentication is available to all CloudFront customers at no additional cost. Customers can configure mutual TLS with CloudFront using the AWS Management Console, CLI, SDK, CDK, and CloudFormation. For detailed implementation guidance and best practices, visit the CloudFront Mutual TLS (viewer) documentation.
Amazon EC2 announces interruptible Capacity Reservations
Today, Amazon EC2 announces interruptible Capacity Reservations to help you better utilize your reserved capacity and save costs. On-Demand Capacity Reservations (ODCRs) help you reserve compute capacity in a specific Availability Zone for any duration. When ODCRs are not in use, you can now make them temporarily available as interruptible ODCRs, enabling other workloads within your organization to utilize them while preserving your ability to reclaim the capacity for critical operations.

By repurposing unused capacity as interruptible ODCRs, workloads suited to flexible, fault-tolerant operation, such as batch processing, data analysis, and machine learning training, can benefit from temporarily available capacity. Reservation owners can reclaim their capacity at any time, while consumers of interruptible ODCRs receive an interruption notice before termination to allow for graceful shutdown or checkpointing. Interruptible ODCRs are now available at no additional cost to all Capacity Reservations customers. Refer to the AWS Capabilities by Region website for the feature’s Regional availability. CloudFormation support is coming soon. For more details, please refer to the Capacity Reservations user guide.
AWS IoT Core now supports IoT thing registry data retrieval from IoT rules
AWS IoT Core announces a new capability to dynamically retrieve IoT thing registry data using an IoT rule, enhancing your ability to filter, enrich, and route IoT messages. Using the new get_registry_data() inline rule function, you can access IoT thing registry data, such as device attributes, device type, and group membership, and leverage this information directly in IoT rules.

For example, your rule can filter AWS IoT Core connectivity lifecycle events and then retrieve thing attributes (such as whether a device is a “test” or “production” device) to inform routing of lifecycle events to different endpoints for downstream processing. You can also use this feature to enrich or route IoT messages with registry data from other devices. For instance, you can add a sensor’s threshold temperature from the IoT thing registry to the messages relayed by its gateway. To get started, connect your devices to AWS IoT Core and store your IoT device data in the IoT thing registry. You can then use IoT rules to retrieve your registry data. This capability is available in all AWS Regions where AWS IoT Core is available. For more information, refer to the developer guide and API documentation.
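As a rough sketch, a topic rule payload (the shape used by the CreateTopicRule API) could call the new function in its SQL. The topic names, role ARN, and the exact arguments accepted by get_registry_data() below are assumptions for illustration; consult the developer guide for the supported signature:

```python
# Illustrative only: a CreateTopicRule-style payload whose SQL calls the
# new get_registry_data() inline function. Topic filters, the function's
# argument list, and the ARN are hypothetical placeholders.

rule_sql = (
    "SELECT *, get_registry_data('thingAttributes', topic(2)) AS registry "
    "FROM '$aws/events/presence/connected/+'"
)

topic_rule_payload = {
    "sql": rule_sql,
    "awsIotSqlVersion": "2016-03-23",
    "actions": [
        {
            "republish": {
                "topic": "lifecycle/production",  # hypothetical downstream topic
                "roleArn": "arn:aws:iam::123456789012:role/iot-rule-role",
            }
        }
    ],
}
```

The enriched `registry` field can then be inspected in the rule or by downstream consumers to route "test" and "production" devices differently.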
AWS Elemental MediaTailor now supports HLS Interstitials for live streams
AWS Elemental MediaTailor now supports HTTP Live Streaming (HLS) Interstitials for live streams, enabling broadcasters and streaming service providers to deliver seamless, personalized ad experiences across a wide range of modern video players. This capability allows customers to insert interstitial advertisements and promotions directly into live streams using the HLS Interstitials specification (an extension of HLS, RFC 8216), which is natively supported by popular players including HLS.js, Shaka Player, Bitmovin Player, and Apple devices running iOS 16.4, iPadOS 16.4, tvOS 16.4, and later.

With HLS Interstitials, MediaTailor automatically generates the necessary metadata tags (Interstitial-class EXT-X-DATERANGE with X-ASSET-LIST attributes) that signal to client players when and how to play interstitial content. This approach eliminates the need for custom player-side stitching logic, reducing development complexity and ensuring consistent playback behavior. The feature integrates with MediaTailor’s existing server-side ad insertion (SSAI) capabilities, delivering frame-accurate transitions with no buffering between content and interstitials. Server-side beaconing continues to work with HLS Interstitials, ensuring ad tracking and measurement workflows remain intact. HLS Interstitials for live streams is particularly valuable for sports broadcasts, live news, and event streaming where precise ad timing and minimal latency are critical. The feature supports pre-roll and mid-roll insertion, giving customers flexibility in how they monetize their live content. This launch complements MediaTailor’s existing HLS Interstitials support for VOD, rounding out support across Linear, Live, FAST, and VOD workflows. MediaTailor makes it easy to test and deploy: customers can rapidly enable or disable HLS Interstitials with a simple query parameter on the multi-variant manifest request, providing per-session control without changing the underlying MediaTailor configuration.
AWS Elemental MediaTailor HLS Interstitials for live streams is available today in all AWS Regions where MediaTailor operates. You pay only for the features you use, with no upfront commitments. To learn more and get started, visit the AWS Elemental MediaTailor documentation and the HLS Interstitials implementation guide.
Amazon Redshift now supports federated permissions across multi-warehouse architectures
Amazon Redshift now supports federated permissions, which simplify permissions management across multiple Redshift data warehouses. Customers are adopting multi-warehouse architectures to scale and isolate workloads and are looking for simplified, consistent permissions management across warehouses. With Redshift federated permissions, you define data permissions once from any Redshift warehouse and automatically enforce them across all warehouses in the account. Amazon Redshift warehouses with federated permissions are auto-mounted in every Redshift warehouse, and you can use existing workforce identities with AWS IAM Identity Center or existing IAM roles to query data across warehouses. Regardless of which warehouse is used for querying, row-level, column-level, and masking controls always apply automatically, delivering fine-grained access compliance.

You can get started by registering a Redshift Serverless namespace or Redshift provisioned cluster with the AWS Glue Data Catalog and querying across warehouses using Redshift Query Editor V2 or any supported SQL client. Multiple warehouses scale horizontally without increasing governance complexity: new warehouses automatically enforce permission policies, and analysts immediately see all databases from registered warehouses. Amazon Redshift federated permissions are available at no additional cost in supported AWS Regions. To learn more, visit the Amazon Redshift documentation.
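Because databases from registered warehouses are auto-mounted, a cross-warehouse query reduces to addressing a mounted database by name. A sketch (all database, schema, and table names are hypothetical):

```python
# With federated permissions, databases from registered warehouses appear
# auto-mounted, so a query can address them with a qualified identifier.
# The names used here are hypothetical placeholders.

def cross_warehouse_query(database, schema, table):
    """Build a SQL query against a database mounted from another warehouse."""
    return f'SELECT * FROM "{database}"."{schema}"."{table}" LIMIT 10;'

print(cross_warehouse_query("sales_wh_db", "public", "orders"))
```

Whichever warehouse executes this query, the announcement's row-level, column-level, and masking controls apply automatically.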
Amazon U7i instances now available in Asia Pacific (Jakarta) Region
Starting today, Amazon EC2 High Memory U7i instances with 6 TB of memory (u7i-6tb.112xlarge) are available in the Asia Pacific (Jakarta) Region. U7i-6tb instances are part of the AWS 7th-generation instance family and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6 TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.

U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Amazon Connect flow modules now support custom inputs, outputs, and version management
Amazon Connect flow modules now support custom inputs, outputs, and branches, along with version and alias management. With this launch, you can define flexible parameters for your reusable flow modules to match your specific business logic. For example, you can create an authentication module that accepts a phone number and PIN as inputs, then returns the customer name and authentication status as outputs with branches such as “authenticated” or “not authenticated”. All parameters are customizable to meet your specific needs.

Additionally, advanced versioning and aliasing capabilities allow you to manage module updates more seamlessly. You can create immutable version snapshots and map aliases to specific versions. When you update an alias to point to a new version, all flows using that module automatically reference the updated version. These new features make flow modules more powerful and reusable, allowing you to build and maintain flows more efficiently. To learn more about these features, see the Amazon Connect Administrator Guide. This feature is available in all AWS Regions that offer Amazon Connect. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
AWS Glue announces catalog federation for remote Apache Iceberg catalogs
AWS Glue announces the general availability of catalog federation for remote Iceberg catalogs. This capability provides direct and secure access from AWS analytics engines to Iceberg tables stored in Amazon S3 and cataloged in remote catalogs.

With catalog federation, you can federate to remote Iceberg catalogs and query remote Iceberg tables using your preferred AWS analytics engines, without moving or copying tables. Metadata is synchronized in real time between the AWS Glue Data Catalog and remote catalogs when data teams query remote tables, so query results are always up to date. You can now choose the best price-performance for your workloads when analyzing remote Iceberg tables, while maintaining consistent security controls when discovering or querying data. Catalog federation is supported by a wide variety of analytics engines, including Amazon Redshift, Amazon EMR, Amazon Athena, AWS Glue, third-party engines like Apache Spark, and serverless notebooks in Amazon SageMaker. Catalog federation uses AWS Lake Formation for access control, allowing you to use fine-grained access controls, cross-account sharing, and trusted identity propagation when sharing remote catalog tables with other data consumers. It integrates with catalog implementations that support the Iceberg REST specification. Catalog federation is available in the Lake Formation console and through the AWS Glue and Lake Formation SDKs and APIs. This feature is generally available in all AWS commercial Regions where AWS Glue and Lake Formation are available. With just a few clicks in the console, you can federate to remote catalogs, discover their databases and tables, grant permissions to access table data, and query remote Iceberg tables using AWS analytics engines. To learn more, visit the documentation.
Claude Opus 4.5 now available in Amazon Bedrock
Customers can now use Claude Opus 4.5 in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies. Opus 4.5 is Anthropic’s newest model, setting new standards across coding, agentic workflows, computer use, and office tasks while making Opus-level intelligence accessible at one-third the cost.

Opus 4.5 excels at professional software engineering tasks, achieving state-of-the-art performance on SWE-bench. The model handles ambiguity, reasons about tradeoffs, and can figure out fixes for bugs that require reasoning across multiple systems. It can help transform multi-day team development projects into hours-long tasks with improved multilingual coding capabilities. This generation of Claude spans the full development lifecycle: Opus 4.5 for production code and lead agents, Sonnet 4.5 for rapid iteration and scaled user experiences, and Haiku 4.5 for sub-agents and free-tier products. Beyond coding, the model powers agents that produce documents, spreadsheets, and presentations with consistency, professional polish, and domain awareness, making it ideal for finance and other precision-critical verticals. As Anthropic’s best vision model yet, it unlocks workflows that depend on complex visual interpretation and multi-step navigation. Through the Amazon Bedrock API, Opus 4.5 introduces two new capabilities: tool search and tool use examples. Together, these updates enable Claude to navigate large tool libraries and accurately execute complex tasks. A new effort parameter, available in beta, lets you control how much effort Claude allocates across thinking, tool calls, and responses to balance performance with latency and cost. Claude Opus 4.5 is now available in Amazon Bedrock via global cross-Region inference in multiple locations. For the full list of available Regions, refer to the documentation. To get started with the model in Amazon Bedrock, read the launch blog or visit the Amazon Bedrock console.
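A request to the model through the Bedrock Converse API might be shaped like the sketch below, assembled locally. The model ID is a placeholder (look up the exact identifier in the Bedrock model catalog), and routing the beta effort setting through additionalModelRequestFields is an assumption to verify against the launch documentation:

```python
# A sketch of a Bedrock Converse API request body for Opus 4.5.
# "modelId" is a placeholder, and the "effort" field's name and
# placement are assumptions - check the Bedrock documentation.

converse_request = {
    "modelId": "global.anthropic.claude-opus-4-5-v1",  # placeholder ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this stack trace."}]}
    ],
    "inferenceConfig": {"maxTokens": 1024},
    "additionalModelRequestFields": {"effort": "medium"},  # assumed beta field
}
```

In practice this dictionary would be unpacked into a `boto3` Bedrock Runtime `converse(**converse_request)` call.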
AWS Lambda announces enhanced error handling capabilities for Kafka event processing
AWS Lambda launches enhanced error handling capabilities for Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka event sources. These capabilities allow customers to build custom retry configurations, optimize retries of failed messages, and send failed events to a Kafka topic as an on-failure destination, enabling customers to build resilient Kafka workloads with robust error handling strategies.

Customers use Kafka event source mappings (ESMs) with their Lambda functions to build mission-critical Kafka applications. Kafka ESMs offer robust error handling of failed events by retrying events with exponential backoff and retaining failed events in on-failure destinations like Amazon SQS, Amazon S3, and Amazon SNS. However, customers need customized error handling to meet stringent business and performance requirements. With this launch, developers can now exercise precise control over failed event processing and leverage Kafka topics as an additional on-failure destination when using Provisioned mode for Kafka ESMs. Customers can define specific retry limits and time boundaries for retries, automatically discarding failed records beyond these limits to a customer-specified destination. They can also set automatic retries of failed records in a batch and enhance their function code to report individual failed messages, optimizing the retry process. This feature is available in all AWS Commercial Regions where AWS Lambda’s Provisioned mode for Kafka ESM is available. To enable these capabilities, provide configuration parameters for your Kafka ESM using the ESM API, AWS Management Console, or AWS CLI. To learn more, read the Lambda ESM documentation and see AWS Lambda pricing.
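An UpdateEventSourceMapping request carrying these settings might look like the sketch below. The field names follow the existing ESM API for stream sources; whether Kafka ESMs reuse these exact names, and the ARN format for a Kafka topic destination, are assumptions to verify against the ESM API reference:

```python
# A sketch of an UpdateEventSourceMapping request body with the new
# error-handling settings. Field names mirror the existing ESM API for
# stream event sources; their applicability to Kafka ESMs and the
# destination ARN below are assumptions.

esm_update = {
    "UUID": "esm-uuid-placeholder",
    "MaximumRetryAttempts": 5,            # retry limit before discarding
    "MaximumRecordAgeInSeconds": 3600,    # time boundary for retries
    "FunctionResponseTypes": ["ReportBatchItemFailures"],  # per-message failure reporting
    "DestinationConfig": {
        "OnFailure": {
            # hypothetical ARN: a Kafka topic as the on-failure destination
            "Destination": "arn:aws:kafka:us-east-1:123456789012:topic/my-cluster/dlq-topic"
        }
    },
}
```

With `ReportBatchItemFailures`, the function returns the identifiers of individual failed records so only those are retried, rather than the whole batch.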
Amazon MSK Replicator is now available in five additional AWS Regions
You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in five additional AWS Regions: Asia Pacific (Thailand), Mexico (Central), Asia Pacific (Taipei), Canada West (Calgary), and Europe (Spain).

MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in the same or different AWS Regions in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. It automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, access control lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing. You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. To learn more, visit the MSK Replicator product page, pricing page, and documentation.
Amazon Quick Suite Embedded Chat is now available
Today, AWS announces the general availability of Amazon Quick Suite Embedded Chat, enabling you to embed Quick Suite’s conversational AI, which combines structured data and unstructured knowledge in a single conversation, directly into your applications, eliminating the need to build conversational interfaces, orchestration logic, or data access layers from scratch.

Quick Suite Embedded Chat solves a fundamental problem: users want answers where they work, not in another tool. Whether in a CRM, support console, or analytics portal, they need instant, contextual responses. Most conversational tools excel at either structured data or documents, analytics or knowledge bases, answering questions or performing actions, but rarely all of the above. Quick Suite closes this gap. Now, users can reference a KPI, pull details from a file, check customer feedback, and trigger actions in one continuous conversation without leaving the embedded chat. Embedded Chat brings this unified experience into your applications with simple integration, either through 1-click embedding or through API-based iframes for registered users with your existing authentication. You can connect your Agentic Chat to your data through connectors that search SharePoint and websites, send Slack messages, or create Jira tasks, and you can customize the agent with your brand colors, communication style, and personalized greetings. Security always stays under your control: you choose what the agent accesses and explicitly scope all actions. Quick Suite Embedded Chat is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and availability will expand to additional AWS Regions over the coming months. There is no additional cost for Quick Suite Embedded Chat. Existing Quick Suite pricing is available here. To learn more, see the Embedding Amazon Quick Suite launch blog. To get started with Amazon Quick Suite, visit the Amazon Quick Suite product page.
Amazon OpenSearch Service now supports OpenSearch version 3.3
You can now run OpenSearch version 3.3 in Amazon OpenSearch Service. OpenSearch 3.3 introduces several improvements in areas like search performance and observability, along with new functionality that makes agentic AI integrations simpler and more powerful.

This launch includes several improvements in vector search capabilities. First, with agentic search, you can now achieve precise search results using natural language inputs without the need to construct complex domain-specific language (DSL) queries. Second, batch processing for the semantic highlighter improves performance by reducing overhead latency and improving GPU utilization. Finally, enhancements to the Neural Search plugin make semantic search more efficient and provide optimization options for your specific data, performance, and relevance needs.
This launch also introduces support for Apache Calcite as the default query engine for PPL, delivering optimization capabilities, improvements to query processing efficiency, and an extensive library of new PPL commands and functions. Additionally, this launch includes enhancements to the approximation framework that improve the responsiveness of paginated search results, real-time dashboards, and applications requiring deep pagination through large time-series or numeric datasets. Finally, the workload management plugin now allows you to group search traffic and isolate network resources, preventing specific requests from overusing network resources and offering tenant-level isolation.
For information on upgrading to OpenSearch 3.3, please see the documentation. OpenSearch 3.3 is now available in all AWS Regions where Amazon OpenSearch Service is available.
Amazon Aurora PostgreSQL introduces dynamic data masking
Amazon Aurora PostgreSQL-Compatible Edition now supports dynamic data masking through the new pg_columnmask extension, allowing you to simplify the protection of sensitive data in your database. pg_columnmask extends Aurora’s security capabilities by enabling column-level protection that complements PostgreSQL’s native row-level security and column-level grants. Using pg_columnmask, you can control access to sensitive data through SQL-based masking policies and define how data appears to users at query time based on their roles, helping you comply with data privacy regulations like GDPR, HIPAA, and PCI DSS.

With pg_columnmask, you can create flexible masking policies using built-in or user-defined functions. You can completely hide information, replace partial values with wildcards, or define custom masking approaches. Further, you can apply multiple masking policies to a single column and control their precedence using weights. pg_columnmask helps protect data in complex queries with WHERE, JOIN, ORDER BY, or GROUP BY clauses. Data is masked at the database level during query processing, leaving stored data unmodified.
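To make the "replace partial values with wildcards" idea concrete, here is a pure-Python sketch of the kind of transformation a masking policy could attach to a column; the function name and keep-last-four rule are illustrative, not the extension's actual built-ins:

```python
# A sketch of a partial-masking rule of the kind pg_columnmask applies
# at query time: keep the trailing characters, replace the rest with
# '*'. The function and its defaults are illustrative only.

def partial_mask(value: str, keep_last: int = 4) -> str:
    """Mask all but the trailing `keep_last` characters of a value."""
    if len(value) <= keep_last:
        return value
    return "*" * (len(value) - keep_last) + value[-keep_last:]

print(partial_mask("123-45-6789"))  # prints *******6789
```

A policy built on such a function would let an analyst role see `*******6789` while the stored SSN remains unchanged on disk.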
pg_columnmask is available for Aurora PostgreSQL version 16.10 and higher, and 17.6 and higher in all AWS Regions where Aurora PostgreSQL is available. To learn more, review our blog post and visit technical documentation.
Amazon CloudFront integrates with VPC IPAM to support BYOIP
Amazon CloudFront now supports bringing your own IP addresses (BYOIP) for Anycast Static IPs via VPC IP Address Manager (IPAM). This capability enables network administrators to use their own public IPv4 address pools with CloudFront distributions, simplifying IP address management across AWS’s global infrastructure.

CloudFront typically uses rotating IP addresses to serve traffic. CloudFront Anycast Static IPs enables customers to provide a dedicated list of IP addresses to partners and customers, enhancing security and simplifying network management. Previously, customers implementing Anycast Static IPs received AWS-provided static IP addresses for their workloads. With IPAM’s unified interface, customers can now create dedicated IP address pools using BYOIP and assign them to CloudFront Anycast Static IP lists. Customers do not need to change the existing IP address space for their applications when they migrate to CloudFront, thus maintaining existing allow-lists and branding.
The feature is available within Amazon VPC IPAM in all commercial AWS Regions, excluding the AWS GovCloud (US) Regions, and China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD). To learn more about CloudFront BYOIP feature, view the BYOIP CloudFront documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.
Amazon SageMaker HyperPod now supports NVIDIA Multi-Instance GPU (MIG) for generative AI tasks
Amazon SageMaker HyperPod now supports NVIDIA Multi-Instance GPU (MIG) technology, enabling administrators to partition a single GPU into multiple isolated GPU instances. This capability allows administrators to maximize resource utilization by running diverse, small generative AI (GenAI) tasks simultaneously on GPU partitions while maintaining performance and task isolation.

Administrators can choose either the easy-to-use configuration setup in the SageMaker HyperPod console or a custom setup approach to enable fine-grained, hardware-isolated resources for tasks that don’t require full GPU capacity. They can also allocate compute quota to ensure fair and efficient distribution of GPU partitions across teams. With a real-time monitoring dashboard for performance metrics and resource utilization across GPU partitions, administrators gain the visibility to optimize resource allocation. Data scientists can now accelerate time-to-market by scheduling lightweight inference tasks and running interactive notebooks in parallel on GPU partitions, eliminating wait times for full GPU availability. This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator in the following AWS Regions: US West (Oregon), US East (N. Virginia), US East (Ohio), US West (N. California), Canada (Central), South America (São Paulo), Europe (Stockholm), Europe (Spain), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Singapore). To learn more, visit the SageMaker HyperPod webpage and SageMaker HyperPod documentation.
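Since the EKS orchestrator schedules tasks as Kubernetes pods, a MIG partition is requested like any extended resource. The sketch below builds such a pod spec as a dictionary; the `nvidia.com/mig-<profile>` resource name follows the standard NVIDIA device-plugin convention, and the image name and available profile are assumptions depending on how the administrator partitioned the cluster's GPUs:

```python
# A sketch of a Kubernetes pod spec requesting one MIG slice on a
# HyperPod EKS cluster. The resource name follows the NVIDIA device-
# plugin convention (nvidia.com/mig-<profile>); the image and the
# profile available on your cluster are hypothetical.

pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "light-inference"},
    "spec": {
        "containers": [
            {
                "name": "worker",
                "image": "public.ecr.aws/example/inference:latest",  # placeholder image
                "resources": {
                    # one 1g.5gb MIG partition instead of a whole GPU
                    "limits": {"nvidia.com/mig-1g.5gb": 1}
                },
            }
        ]
    },
}
```

Several such pods can then land on partitions of the same physical GPU, which is exactly how lightweight inference tasks and notebooks run in parallel without each claiming a full device.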
AWS Blogs
AWS Japan Blog (Japanese)
AWS News Blog
AWS Cloud Operations Blog
AWS Big Data Blog
AWS Compute Blog
AWS Contact Center
Containers
AWS Database Blog
AWS DevOps & Developer Productivity Blog
AWS HPC Blog
AWS for Industries
- Solve customer identity fragmentation at scale with AWS Entity Resolution
- Optimize Your Retail Business for AI Search Platforms with AWS and Botify
- Revolutionize personalized radiology learning using AI and AWS
Artificial Intelligence
- Accelerate generative AI innovation in Canada with Amazon Bedrock cross-Region inference
- Power up your ML workflows with interactive IDEs on SageMaker HyperPod
- Claude Opus 4.5 now in Amazon Bedrock
- Deploy GPT-OSS models with Amazon Bedrock Custom Model Import
Networking & Content Delivery
- Trust goes both ways: Amazon CloudFront now supports viewer mTLS
- How AWS improves global connectivity via automated traffic engineering