5/14/2026, 12:00:00 AM ~ 5/15/2026, 12:00:00 AM (UTC)
Recent Announcements
Amazon CloudFront announces Passthrough Mode for mutual TLS (Viewer)
Amazon CloudFront now supports passthrough mode for viewer mutual TLS (mTLS) authentication, enabling customers to forward client certificates to their origin for validation without CloudFront performing certificate verification. Passthrough mode lets customers with existing mTLS implementations at their origins use CloudFront without having to reimplement their validation logic at the edge.

CloudFront viewer mTLS already supports required mode and optional mode, which offload client certificate authentication to CloudFront using trust stores. Passthrough mode is designed for customers who want to keep their existing mTLS validation infrastructure at their origin, and it requires no trust store configuration on CloudFront. In passthrough mode, CloudFront forwards every request to the origin along with the client's full certificate chain. Caching is not performed, ensuring each request is authenticated end-to-end by your origin. Connection functions, which allow you to inspect or transform connection-level data, are still invoked, enabling you to process certificate data before it reaches the origin.
CloudFront Mutual TLS (viewer) in passthrough mode is available at no additional cost. To learn more, refer to the documentation for CloudFront Mutual TLS (Viewer).
Amazon Bedrock Introduces Advanced Prompt Optimization and Migration Tool
Customers spend days to weeks optimizing prompts and evaluating responses when they migrate to a new model or simply want better performance from their current model. They struggle to change their prompts quickly and then test them to prevent regressions and improve underperforming tasks. Both situations call for the same tool: a prompt optimizer with built-in evaluations.

Today, Amazon Bedrock introduces Advanced Prompt Optimization, a new tool that allows customers to optimize their prompts for any model on Bedrock while comparing their original prompts to their optimized prompts across up to 5 models simultaneously. Customers can use it whether they are migrating to a new model or just want better performance from their current one. If they're changing models, they can select their current model as a baseline and up to 4 other models. If they aren't changing models, they select only their current model to see before-and-after results. The optimizer takes in prompt templates, example user inputs for the variable values, optional ground truth answers, and an evaluation metric or short natural-language criteria to use as a guide. It is even compatible with multimodal inputs such as JPG, PNG, or PDF files. The prompt optimizer works in a feedback loop to steer the prompt and resulting model responses toward optimizing the evaluation metric, and outputs the original and final prompt templates with evaluation scores, cost estimates, and latency.
For region availability, see our documentation. For pricing, see the Bedrock pricing page. To get started, use the Bedrock APIs for Advanced Prompt Optimizer or visit the Bedrock Console.
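The inputs described above (template, example inputs, optional ground truth, criteria, and a baseline plus up to 4 comparison models) can be sketched as a request payload. All key names below are hypothetical illustrations of the described inputs, not the actual Bedrock API shape:

```python
# Hypothetical sketch of an Advanced Prompt Optimization request payload.
# Key names are illustrative assumptions; consult the Bedrock API docs
# for the authoritative request shape.

def build_optimization_request(prompt_template, examples, target_models,
                               baseline_model=None, criteria=None,
                               ground_truth=None):
    """Assemble a request comparing a baseline prompt against optimized
    prompts on up to 5 models total (baseline plus up to 4 others)."""
    models = ([baseline_model] if baseline_model else []) + list(target_models)
    if len(models) > 5:
        raise ValueError("at most 5 models can be compared simultaneously")
    request = {
        "promptTemplate": prompt_template,   # template with variables
        "exampleInputs": examples,           # values for the variables
        "models": models,
        "evaluationCriteria": criteria or "answer correctness",
    }
    if ground_truth is not None:
        request["groundTruth"] = ground_truth  # optional reference answers
    return request
```

Note the two usage patterns from the announcement: pass a baseline plus up to 4 candidates when migrating models, or pass just your current model to see before-and-after optimization.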
Announcing general availability of Amazon EC2 M3 Ultra Mac instances
Amazon Web Services announces general availability of Amazon EC2 M3 Ultra Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M3 Ultra Mac instances are the next generation of EC2 Mac instances, enabling Apple developers to migrate their most demanding build and test workloads to AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari.

M3 Ultra Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps of network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M3 Ultra Mac Studio computers featuring a 28-core CPU, 60-core GPU, 32-core Neural Engine, and 256 GB of unified memory. Compared to EC2 M4 Max Mac instances, M3 Ultra Mac instances provide 2x the unified memory, 1.75x the CPU cores, 1.5x the GPU cores, and 2x the Neural Engine cores, giving Apple developers the headroom to run significantly more Xcode simulators in parallel and accelerate on-device ML workflows to improve time to market.
Amazon EC2 M3 Ultra Mac instances are available in US East (N. Virginia) and US West (Oregon). To learn more about Amazon EC2 M3 Ultra Mac instances, visit the Amazon EC2 Mac page.
SageMaker AI now supports serverless model customization for Qwen3.6
Amazon SageMaker AI now supports serverless model customization for the Qwen3.6 27B parameter model using supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). Qwen3.6 is a popular open-weight model family from Alibaba Cloud. This launch adds to our existing support for fine-tuning Qwen3.5 and other popular models. Before this launch, you could deploy the Qwen3.6 base model on SageMaker AI; now you can also adapt it to your specific domains and workflows.

Model customization enables you to tailor foundation models with your proprietary data so they more accurately reflect your domain knowledge, terminology, and quality standards. Rather than building models from scratch, fine-tuning lets you start from a capable base model and specialize it for your use cases, whether that's improving accuracy on domain-specific tasks, aligning outputs with your organization's tone, or improving performance on new tasks using your labeled data. With serverless customization, SageMaker AI handles all infrastructure provisioning and training orchestration, so you can focus on your data and evaluation rather than cluster management, and you pay only for what you use. Serverless model customization for Qwen3.6 on SageMaker AI is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and EU (Ireland). To get started, navigate to the Models page in Amazon SageMaker Studio to launch a customization job, or use the SageMaker Python SDK for programmatic access. To learn more, see the Amazon SageMaker AI model customization documentation.
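The "serverless" part of the announcement means a customization job needs only a method and data, with no cluster configuration. A minimal sketch of such a job specification; the key names and model identifier are assumptions for illustration, not the real SageMaker SDK surface:

```python
# Hypothetical sketch of a serverless customization job spec for the
# Qwen3.6 27B model. Key names and the model identifier are illustrative
# assumptions; use the SageMaker Python SDK or Studio for the real flow.

def customization_job(method: str, training_data_s3: str) -> dict:
    """Choose supervised fine-tuning (SFT) or reinforcement fine-tuning
    (RFT) for the Qwen3.6 27B base model."""
    if method not in ("SFT", "RFT"):
        raise ValueError("method must be 'SFT' or 'RFT'")
    return {
        "baseModel": "qwen3.6-27b",      # illustrative identifier
        "method": method,
        "trainingData": training_data_s3,
        # Deliberately no instance type or cluster settings: SageMaker AI
        # provisions infrastructure and orchestrates training for you.
    }
```

The absence of any instance or cluster fields is the point: you supply data and an evaluation, and pay only for what you use.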
AWS Transform agents now available in Kiro, Claude, Cursor, and Codex
Today, AWS announces that the AWS Transform agents, built on decades of AWS migration and modernization experience, are now accessible through a Kiro power, agent plugins, and the AWS Transform MCP server. Developers can now consume all of AWS Transform's capabilities directly from their preferred development environment, whether working interactively in an agentic IDE, managing jobs through the web console, or integrating programmatically via MCP.

This launch gives builders flexibility to choose the surface that fits their workflow while gaining the depth of transformation expertise behind the AWS Transform agents for Windows, VMware, mainframe, and more. A developer can start a transformation in their agentic IDE, monitor progress and collaborate in the web console, then see results back in their IDE, all against the same underlying job with consistent state. Additionally, AWS Transform now supports IAM role authentication. Customers who start using AWS Transform in their IDE or the web app can use their existing AWS credentials to create a Transform environment, workspace, and transformation job.
The agent plugin and MCP server are available on GitHub, and the Kiro power is available in the Kiro marketplace. To learn more, see https://aws.amazon.com/transform.
Announcing general availability of the agent builder toolkit Kiro power for AWS Transform
Today, as part of the AWS Transform composability initiative, AWS announces the general availability of the agent builder toolkit Kiro power for AWS Transform. With the agent builder toolkit, AWS Partners and customers can build agents tailored to their specific modernization needs and ensure they work seamlessly within AWS Transform.

This capability enables Migration and Modernization Competency Partners, ISVs, and customers to create differentiated transformation solutions by integrating their specialized agents, tools, knowledge bases, and workflows with AWS Transform's agentic AI capabilities. The agent builder toolkit provides the end-to-end lifecycle for transformation agents: build agents using the Kiro power, share them with teams or across partner networks, and register them with AWS Transform for discovery. The agent builder toolkit for AWS Transform is available in the Kiro power marketplace. To learn more, see AWS Transform (https://aws.amazon.com/transform).
AWS Transform now supports customer-owned artifact stores
AWS Transform brings assessment, migration, and modernization into a single AI-powered experience that guides enterprises through their full transformation journey. Today, AWS announces support for customer-owned Amazon S3 buckets, giving customers full control over where their transformation artifacts are stored and how they are secured.

With this launch, you can configure your own S3 bucket, optionally encrypt artifacts with your own AWS KMS key, and manage access policies through your own AWS account. Migration practitioners can upload files directly to their bucket for immediate use by transformation agents and centralize artifact storage across multiple AWS accounts. This is designed to help enterprises in regulated industries meet data sovereignty and compliance requirements without changing how they use AWS Transform.
This capability is available in all AWS Regions where AWS Transform is offered. To learn more, see the AWS Transform User Guide.
New models for image generation and text embeddings are now available in Amazon SageMaker JumpStart
Today, AWS announced the availability of FLUX.2-klein-base-4B and Qwen3-Embedding-0.6B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These models from Black Forest Labs and Qwen bring state-of-the-art image generation and multilingual text embedding capabilities, enabling customers to build creative AI applications and intelligent search systems on AWS infrastructure.

These models address different enterprise AI challenges with specialized capabilities:
FLUX.2-klein-base-4B excels at real-time image generation and multi-reference editing in a compact architecture, delivering state-of-the-art quality that runs on consumer hardware with as little as 13GB VRAM. It is ideal for creative content pipelines, product visualization, rapid prototyping, and applications that require high-quality image synthesis without sacrificing speed.
Qwen3-Embedding-0.6B excels at text embedding for retrieval, classification, clustering, and bitext mining across 100+ languages, with flexible output dimensions and instruction-aware embeddings. It is ideal for building semantic search systems, RAG pipelines, multilingual document retrieval, and applications that require efficient, high-quality text representations at scale.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.
To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
Amazon Application Recovery Controller (ARC) Region Switch adds the Lambda event source mapping execution block
Amazon Application Recovery Controller (ARC) Region Switch helps customers orchestrate the failover of their multi-Region applications to achieve a bounded recovery time in the event of a Regional impairment. Today, we are announcing the Lambda event source mapping execution block, which automates the coordinated failover of event streams for multi-Region workloads.

Customers running event-driven architectures use Lambda functions with event source mappings to process event streams from Kinesis, DynamoDB Streams, MSK, or SQS. For active-passive workloads, customers may maintain Lambda functions in each Region but process events in only one Region at a time. These event source mappings must be toggled during failover to avoid duplicate processing, a manual and error-prone step. The Lambda event source mapping execution block automates this by enabling or disabling event source mappings in either the activating or deactivating Region. To control duplicate processing, customers can configure two Lambda event source mapping execution blocks in sequence: a disable block to stop event processing in the deactivating Region, and an enable block to start it in the activating Region. The disable block can be overridden by running the plan in "ungraceful" mode for unplanned failovers where the deactivating Region may be impaired. Native cross-account support enables a single plan to handle event stream failover across multiple accounts. To get started, see the Lambda event source mapping execution block documentation. ARC Region Switch is available in all commercial Regions. See the ARC Region Switch availability documentation.
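The two-block sequence described above (disable in the deactivating Region, then enable in the activating Region) can be sketched as plan steps. The block shapes below are illustrative assumptions, not the actual Region Switch plan schema:

```python
# Sketch of sequencing the two Lambda event source mapping (ESM)
# execution blocks: disable first in the deactivating Region, then
# enable in the activating Region, to limit duplicate processing.
# Dict shapes are illustrative assumptions, not the real plan schema.

def esm_failover_steps(deactivating_region: str, activating_region: str,
                       graceful: bool = True) -> list:
    disable = {
        "type": "lambda-esm-execution-block",
        "action": "disable",
        "region": deactivating_region,
        # In "ungraceful" mode the disable block can be overridden,
        # because the deactivating Region may itself be impaired.
        "overridable": not graceful,
    }
    enable = {
        "type": "lambda-esm-execution-block",
        "action": "enable",
        "region": activating_region,
    }
    # Ordering matters: stop consumption before starting it elsewhere.
    return [disable, enable]
```

Running the same sequence in "ungraceful" mode simply marks the disable step as overridable so an impaired Region cannot block the failover.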
Three new Qwen models for speech synthesis and recognition are now available in Amazon SageMaker JumpStart
Today, AWS announced the availability of Qwen3-TTS-12Hz-1.7B-CustomVoice, Qwen3-TTS-12Hz-1.7B-Base, and Qwen3-ASR-1.7B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models from Qwen bring advanced speech synthesis and recognition capabilities across 10+ languages, enabling customers to build intelligent voice-powered applications on AWS infrastructure.

These models address different enterprise speech and audio challenges with specialized capabilities:
Qwen3-TTS-12Hz-1.7B-CustomVoice excels at multilingual text-to-speech with customizable voice styles, supporting 10 languages with instruction-driven control over timbre, emotion, and prosody. It is ideal for building real-time interactive voice applications, customer-facing virtual assistants, and content creation workflows that require natural, expressive speech output.
Qwen3-TTS-12Hz-1.7B-Base excels at multilingual text-to-speech with 3-second rapid voice cloning from audio input. It is ideal for building custom voice applications, fine-tuning domain-specific speech synthesis, and scenarios where developers need a flexible foundation model for voice generation.
Qwen3-ASR-1.7B excels at automatic speech recognition supporting 52 languages and dialects with state-of-the-art accuracy in complex acoustic environments. It is ideal for transcription services, multilingual customer support, real-time captioning, and applications that require robust streaming and offline speech-to-text.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.
To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
Two new models for agentic coding and efficient AI are now available in Amazon SageMaker JumpStart
Today, AWS announced the availability of GLM-5.1-FP8 and Phi-4-mini-instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These models from Z.ai and Microsoft bring advanced agentic capabilities and efficient inference to enterprise AI workloads on AWS infrastructure.

These models address different enterprise AI challenges with specialized capabilities:
GLM-5.1-FP8 excels at agentic software engineering with sustained multi-round optimization, handling repository-level code generation, terminal tasks, and complex debugging workflows that improve with extended reasoning. It is ideal for automated code review pipelines, AI-powered development environments, and long-horizon problem-solving where the model iterates over hundreds of rounds to refine solutions.
Phi-4-mini-instruct excels at strong reasoning, math, and logic in memory-constrained and latency-bound environments, supporting 24 languages and function calling in a compact form factor. It is ideal for edge deployment, latency-sensitive applications, multilingual chatbots, and scenarios where customers need capable reasoning with minimal resource overhead.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.
To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
Amazon Aurora DSQL now supports change data capture (Preview)
Amazon Aurora DSQL introduces support for change data capture (CDC) in preview, enabling you to stream real-time database changes directly to Amazon Kinesis Data Streams. This fully managed capability removes the need to build or maintain custom streaming pipelines, making it easier to build event-driven applications, power real-time analytics pipelines, and synchronize data across systems.

Aurora DSQL automatically captures the result of insert, update, and delete operations as change events. You can use these events to synchronize data across microservices, trigger downstream processing with AWS Lambda, or deliver them to Amazon S3, Amazon Redshift, and Amazon OpenSearch Service through Amazon Data Firehose for analytics. CDC streaming requires no infrastructure setup and is designed to have zero impact on your database workload, so you can stream changes without affecting database throughput or latency. CDC streaming in preview is available in all AWS Regions where Aurora DSQL is available. Streams are billed using Distributed Processing Units (DPUs) based on the volume of data captured, with standard Amazon Kinesis Data Streams pricing applying separately. To learn more, read the blog and see the getting started documentation.
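Since change events arrive on a Kinesis stream, a consumer decodes each record's payload into an operation, a table, and a row image. A minimal sketch; the event field names (operation, table, newImage) are illustrative assumptions, so check the Aurora DSQL CDC documentation for the real schema:

```python
import json

# Sketch of decoding a DSQL change event from a Kinesis record payload.
# The event shape below is an illustrative assumption, not the
# documented Aurora DSQL CDC schema.

def parse_change_event(record_data: bytes) -> tuple:
    """Decode one Kinesis record payload into (operation, table, image).

    boto3's kinesis get_records returns each record's Data as bytes,
    which is what this function expects.
    """
    event = json.loads(record_data)
    op = event["operation"]            # "insert", "update", or "delete"
    table = event["table"]
    image = event.get("newImage", {})  # row state after the change
    return op, table, image
```

From here the events can fan out to Lambda, or to S3/Redshift/OpenSearch via Data Firehose, exactly as the announcement describes.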
Reference stack outputs across accounts and Regions with AWS CloudFormation and CDK
AWS CloudFormation now supports a new intrinsic function, Fn::GetStackOutput, that enables you to reference stack outputs across AWS accounts and Regions directly within your CloudFormation templates and CDK applications. This new capability simplifies the provisioning and management of multi-account and multi-Region workloads in CloudFormation and CDK, and eliminates deployment deadlocks when restructuring cross-stack dependencies in CDK apps.

When managing multi-account AWS environments, teams often need to share infrastructure values, such as VPC IDs or database endpoints, across account boundaries. Previously, achieving this required multiple steps, including copying values between templates or coordinating parameter updates across teams. Now, with Fn::GetStackOutput, you simply specify the target stack name, output key, an IAM role ARN for cross-account access, and optionally a Region. CloudFormation assumes the specified role, retrieves the output value, and resolves it during template processing, reducing manual coordination and the risk of configuration drift.

In CDK applications, cross-account and cross-Region references now use this function automatically, eliminating the need for the custom resources and SSM parameters that the previous approach required. Customers can also call Fn.getStackOutput directly to create weak references between stacks, simplifying stack refactoring. To get started, add the Fn::GetStackOutput function to your CloudFormation template and configure the appropriate IAM permissions for cross-account access. In CDK, cross-account and cross-Region references use this function automatically. Visit the AWS CloudFormation User Guide or the CDK developer guide to learn more.
This feature is available in all AWS Regions where CloudFormation is supported. Refer to the AWS Region table for service availability details.
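The inputs the announcement lists (target stack name, output key, IAM role ARN, and optional Region) suggest a template fragment like the one below, shown as the JSON-equivalent Python dict. The property names are assumptions inferred from that description; the CloudFormation User Guide has the authoritative syntax:

```python
# Sketch of a template fragment using Fn::GetStackOutput, expressed as
# the JSON-equivalent Python dict. Property names (StackName, OutputKey,
# RoleArn, Region) follow the announcement's description but are
# assumptions; consult the CloudFormation User Guide for real syntax.

def cross_account_output_ref(stack_name, output_key, role_arn, region=None):
    ref = {
        "StackName": stack_name,  # stack that owns the output
        "OutputKey": output_key,  # which output value to read
        "RoleArn": role_arn,      # role CloudFormation assumes to read it
    }
    if region is not None:
        ref["Region"] = region    # optional; omit for the current Region
    return {"Fn::GetStackOutput": ref}

# e.g. reference a shared VPC ID owned by a networking account:
vpc_id = cross_account_output_ref(
    "shared-network", "VpcId",
    "arn:aws:iam::111122223333:role/CfnOutputReader", "us-west-2")
```

Because CloudFormation resolves the value at template-processing time by assuming the role, the consuming stack never hard-codes the VPC ID, which is what removes the copy-and-coordinate workflow described above.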
Amazon Connect now lets agents view their own performance evaluations
Amazon Connect now supports a permission that gives agents access to their own performance evaluations in the Connect UI, without exposing other agents' evaluations, so they can review feedback to improve their performance. With this permission, agents can search for contacts where they have received an evaluation, view their evaluations alongside call recordings and transcripts, and submit an acknowledgment after reviewing. Agents can be granted access to view their entire department's contacts for investigating multi-contact customer issues, while ensuring that they can only view their own evaluations. This provides operational flexibility while ensuring that agents cannot view sensitive peer performance data.

This feature is available in all AWS Regions where Amazon Connect is offered. To learn more, please see our website and documentation.
Amazon RDS for PostgreSQL supports minor versions 18.4, 17.10, 16.14, 15.18, and 14.23
Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 18.4, 17.10, 16.14, 15.18, and 14.23. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes and improvements added by the PostgreSQL community. This release also adds postgis_topology support in PostGIS 3.6.3 for PostgreSQL 18, enabling you to model and query topological relationships such as network connectivity and spatial adjacency directly in your databases.

You can upgrade your databases during scheduled maintenance windows using automatic minor version upgrades. To simplify operations at scale, enable automatic minor version upgrades and use the AWS Organizations Upgrade Rollout Policy to orchestrate thousands of upgrades in phases, first to development environments before upgrading production systems. You can also use Amazon RDS Blue/Green deployments with physical replication to minimize downtime for minor version upgrades. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI).
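For a single instance, a minor version upgrade scheduled for the next maintenance window comes down to a small set of ModifyDBInstance parameters. The parameter names below are real ModifyDBInstance fields (as used with boto3's RDS client); the version list comes from this release:

```python
# Sketch of the parameters you might pass to the RDS ModifyDBInstance
# API (e.g. boto3's rds client) to schedule a minor version upgrade
# during the next maintenance window rather than immediately.

def minor_upgrade_params(instance_id: str, target_version: str) -> dict:
    # Minor versions announced in this release.
    supported = {"18.4", "17.10", "16.14", "15.18", "14.23"}
    if target_version not in supported:
        raise ValueError(f"not one of this release's versions: {supported}")
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": target_version,
        "ApplyImmediately": False,        # wait for the maintenance window
        "AutoMinorVersionUpgrade": True,  # opt in to future minor upgrades
    }

# Usage (requires AWS credentials, so not executed here):
#   boto3.client("rds").modify_db_instance(**minor_upgrade_params("mydb", "17.10"))
```

Setting AutoMinorVersionUpgrade is what lets the Organizations Upgrade Rollout Policy mentioned above phase future upgrades across fleets.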
AWS Transform adds agentic AI assistant to the AWS Toolkit for Visual Studio
To improve the developer experience, AWS Transform now includes an interactive agentic AI assistant in the AWS Toolkit for Visual Studio. This enables .NET developers to modernize applications through a conversational, step-by-step guided experience directly in their IDE. The assistant provides visibility, checkpointing, and enhanced steering capabilities, so developers who work primarily in the IDE can stay there while retaining fine-grained control. The agent analyzes source code, provides a detailed assessment report, and generates a transformation plan. It then executes modernization tasks interactively, allowing developers to review, edit, and approve each step before proceeding, all without switching to the web console.

You can pause at any step, inspect generated diffs, upload a custom plan, and direct the agent with natural language. The agent automatically attempts to fix build errors encountered during transformation, provides detailed worklogs for transparency, and generates a downloadable HTML summary report upon completion along with recommended next steps. You can start a modernization project in the AWS Transform web console and continue directly in Visual Studio, with full context and progress preserved across both environments, eliminating the need to restart or reconfigure your workflow. In addition to Visual Studio, you can invoke AWS Transform agents from Kiro and other AI coding assistants and coding environments. Through the Kiro power for AWS Transform and AWS Transform MCP agents, you get a unified tool experience that reduces context switching and lets you continue iterating on transformed code in your preferred development environment. This capability is available in the following AWS Regions: US East (N. Virginia), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
To get started, download the latest AWS Toolkit for Visual Studio from the Visual Studio Marketplace. To learn more, visit the AWS Transform for Windows .NET page.
AWS RTB Fabric supports custom domains for real-time bidding workloads
AWS RTB Fabric now supports custom domains for real-time bidding transactions received through external links. This capability helps advertising technology (AdTech) companies preserve their public endpoints and use owned domains without requiring their partners to update their endpoint configurations.

Endpoints (like bid.company.com/path) for real-time bidding workloads typically represent established, long-term traffic contracts. Modifying these endpoints requires coordination across multiple organizations, applications, and domains, which can slow setup between AdTech partners. With custom domains, AdTech companies can use their own domain name system (DNS) and configure canonical name (CNAME) public endpoints. They can also define routing rules to direct traffic to specific RTB Fabric links based on URL patterns. For example, a demand-side platform (DSP) or supply-side platform (SSP) can point their existing DNS records to RTB Fabric and define routing rules that map URL patterns to specific traffic sources. This allows them to seamlessly route all partner traffic through RTB Fabric without altering their own endpoint configurations. Supply partners also do not need to change their configurations.
AWS RTB Fabric helps you connect with your AdTech partners such as Amazon Ads, GumGum, Kargo, MobileFuse, Sovrn, TripleLift, Viant, Yieldmo, and more in three steps while delivering single-digit millisecond latency through a private, high-performance network environment. RTB Fabric reduces standard cloud networking costs by up to 80% and does not require upfront commitments. This capability is available in all AWS Regions where AWS RTB Fabric is supported: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more, visit the documentation or AWS RTB Fabric product page.
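The routing described above (a CNAME points the custom domain at RTB Fabric, then URL-pattern rules fan traffic out to specific links) can be sketched as a first-match prefix table. The rule format and link names here are illustrative assumptions:

```python
# Sketch of the URL-pattern routing described above: after
# bid.company.com CNAMEs to RTB Fabric, rules map request paths to
# specific RTB Fabric links. Rule format and link names are
# illustrative assumptions, not the actual RTB Fabric configuration.

def route(path: str, rules: list) -> str:
    """Return the RTB Fabric link for the first matching path prefix,
    or None when nothing matches."""
    for prefix, link in rules:
        if path.startswith(prefix):
            return link
    return None

# Ordered most-specific first; the catch-all "/" rule goes last so
# partner traffic keeps flowing without partners changing endpoints.
rules = [
    ("/ssp-a/", "rtb-link-ssp-a"),
    ("/ssp-b/", "rtb-link-ssp-b"),
    ("/", "rtb-link-default"),
]
```

First-match ordering is the design choice worth noting: specific partner paths must precede the catch-all, or every request would land on the default link.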
AWS Blogs
AWS Japan Blog (Japanese)
- SAPPHIRE 2026: AWS helps SAP customers migrate and build more quickly
- Amazon Redshift introduces AWS Graviton-based RG instances with an integrated data lake query engine
- AI fills the "no staff" gap: DX on the front lines at four small and medium enterprises in poultry farming, disaster prevention, construction, and chemicals
- Migrating data from Amazon Aurora snapshots to Amazon Aurora DSQL
AWS Database Blog
- Getting started with Change Data Capture in Amazon Aurora DSQL
- Upgrade strategies for Amazon RDS for MySQL 8.0 to 8.4
- Best practices for upgrading Amazon RDS for MySQL 8.0 to 8.4 with prechecks, Blue/Green, and rollback
Artificial Intelligence
- Improve bot accuracy with Amazon Lex Assisted NLU
- Real-time voice agents with Stream Vision Agents and Amazon Nova 2 Sonic
- From siloed data to unified insights: Cross-account Athena Access for Amazon Quick
- Control where your AI agents can browse with Chrome enterprise policies on Amazon Bedrock AgentCore
AWS Security Blog
- Regional routing for AWS access portals: Implementing custom vanity domains for IAM Identity Center
- Automating post-quantum cryptography readiness using AWS Config