4/21/2026, 12:00:00 AM ~ 4/22/2026, 12:00:00 AM (UTC)
Recent Announcements
Amazon SageMaker now supports multi-region replication from IAM Identity Center
Amazon SageMaker now supports multi-region replication from IAM Identity Center (IdC), enabling you to deploy SageMaker Unified Studio domains in different regions from your IdC instance. This new capability empowers enterprise customers, particularly those in regulated industries like financial services and healthcare, to maintain compliance while leveraging centralized workforce identity management.

As an Amazon SageMaker Unified Studio administrator, you can deploy SageMaker domains closer to your workforce based on data residency needs while maintaining seamless single sign-on (SSO) access. Organizations can address use cases such as maintaining IdC in one region while processing sensitive data in compliance-required regions, supporting global operations with centralized identity management, and meeting data sovereignty requirements without compromising SSO capabilities.
To get started, see the SageMaker Unified Studio documentation; to learn about setting up IAM Identity Center multi-Region support, see the IAM Identity Center User Guide.
AWS Marketplace streamlines VAT payment for deemed supply transactions
AWS Marketplace now offers sellers a streamlined self-service process to submit Value Added Tax (VAT) invoices and receive automated VAT disbursements for deemed supply of digital services in the European Union, Norway, and the United Kingdom. Under the European Union, United Kingdom, and Norwegian VAT laws, when AWS Marketplace facilitates digital service sales, the law creates a deemed supply arrangement between sellers and the marketplace. To receive VAT payment, sellers are required to invoice the relevant AWS Europe, Middle East, and Africa (EMEA) SARL branch facilitating their transaction. This new capability provides sellers a unified experience within AWS Marketplace to submit VAT invoices and receive VAT payments, simplifying tax compliance under deemed supply arrangements.

Sellers can now access the new experience through the AWS Marketplace Management Portal or AWS Partner Central, submit VAT invoices, track invoice status in real time, and receive automated VAT payments. The system automatically validates invoices against mandatory fields and disburses VAT amounts once buyer payment is received. Sellers can consolidate multiple deemed supply transactions into a single invoice per period, provided they relate to the same AWS EMEA branch and currency. Sellers can also submit invoices before buyer payment is received, with the system automatically processing disbursements when all conditions are met. Enhanced reporting capabilities through the Seller Reports help sellers identify eligible transactions and reconcile disbursements for audit and financial reporting purposes. This launch eliminates the previous manual process and separate platform onboarding while reducing the administrative burden of tracking VAT invoices and payments.
This capability is available for transactions where both seller and buyer AWS accounts are located in the same country when transacting via the AWS EMEA branch across 20 jurisdictions: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, and the United Kingdom. To learn more about VAT payment for deemed supply transactions and invoice submission requirements, visit the AWS Marketplace Seller Guide or VAT on Deemed Supply FAQs.
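The consolidation rule above (one invoice per period only for transactions that share the same AWS EMEA branch and the same currency) can be sketched as a simple grouping step. This is an illustrative sketch, not the AWS Marketplace API; the field names (`branch`, `currency`, `vat`) are assumptions for the example.

```python
from collections import defaultdict

def group_for_invoicing(transactions):
    """Group deemed-supply transactions so each consolidated invoice covers
    exactly one AWS EMEA branch and one currency, per the stated rule."""
    groups = defaultdict(list)
    for tx in transactions:
        groups[(tx["branch"], tx["currency"])].append(tx)
    return dict(groups)

# Example: three transactions collapse into two invoices.
txs = [
    {"id": "t1", "branch": "AWS EMEA SARL (Germany)", "currency": "EUR", "vat": 19.0},
    {"id": "t2", "branch": "AWS EMEA SARL (Germany)", "currency": "EUR", "vat": 38.0},
    {"id": "t3", "branch": "AWS EMEA SARL (UK)", "currency": "GBP", "vat": 20.0},
]
invoices = group_for_invoicing(txs)
```

Transactions `t1` and `t2` qualify for a single consolidated invoice; `t3` must be invoiced separately because both its branch and currency differ.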
Amazon Athena Spark adds support for AWS PrivateLink
Amazon Athena Spark now supports AWS PrivateLink so that you can access APIs and endpoints from your Amazon Virtual Private Cloud (VPC) without traversing the public internet. This feature can help you meet compliance requirements by allowing you to access and use Athena Spark APIs and endpoints entirely within the AWS network.

You can now create AWS PrivateLink interface endpoints to connect from clients in your VPC. The Athena VPC endpoint supports all Athena Spark APIs and endpoints, including the Spark Connect, Spark Live UI, and Spark History Server endpoints. Communication between your VPC and Athena Spark APIs and endpoints is then conducted entirely within the AWS network, providing a secure pathway for your data. To get started, you can create an interface VPC endpoint to connect to Amazon Athena Spark using the AWS Management Console, AWS Command Line Interface (AWS CLI) commands, or AWS CloudFormation. This new feature is available in all AWS Regions where Amazon Athena Spark and AWS PrivateLink are available. For more information, refer to the AWS PrivateLink documentation and Athena Spark documentation.
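The interface endpoint described above can be created with a single EC2 API call. The sketch below builds the parameters for boto3's `ec2.create_vpc_endpoint()`; the VPC, subnet, and security group IDs are placeholders, and `com.amazonaws.<region>.athena` follows the standard Athena endpoint service naming, so verify the exact service name for your Region in the PrivateLink documentation.

```python
def athena_endpoint_request(region, vpc_id, subnet_ids, sg_id):
    """Build parameters for ec2.create_vpc_endpoint() to reach Athena
    (including the Athena Spark endpoints) over AWS PrivateLink."""
    return {
        "VpcEndpointType": "Interface",
        # Standard Athena endpoint service name pattern; confirm per Region.
        "ServiceName": f"com.amazonaws.{region}.athena",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": [sg_id],
        # Lets clients in the VPC resolve the default Athena DNS name
        # to the private endpoint.
        "PrivateDnsEnabled": True,
    }

params = athena_endpoint_request(
    "us-east-1", "vpc-0abc1234", ["subnet-0a", "subnet-0b"], "sg-0123"
)
# With credentials configured, you would then call:
#   boto3.client("ec2").create_vpc_endpoint(**params)
```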
AWS Lambda functions can now mount Amazon S3 buckets as file systems with S3 Files
AWS Lambda now supports Amazon S3 Files, enabling your Lambda functions to mount Amazon S3 buckets as file systems and perform standard file operations without downloading data for processing. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. Multiple Lambda functions can connect to the same S3 Files file system simultaneously, sharing data through a common workspace without building custom synchronization logic.

The S3 Files integration simplifies stateful workloads in Lambda by eliminating the overhead of downloading objects, uploading results, and managing ephemeral storage limits. This is particularly valuable for AI and machine learning workloads where agents need to persist memory and share state across pipeline steps. Lambda durable functions make these multi-step AI workflows possible by orchestrating parallel execution with automatic checkpointing. For example, an orchestrator function can clone a repository to a shared workspace while multiple agent functions analyze the code in parallel. The durable function handles checkpointing of execution state while S3 Files provides seamless data sharing across all steps.
To use S3 Files with Lambda, configure your function to mount an S3 bucket through the Lambda console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS Serverless Application Model (SAM). To learn more about how to use S3 Files with your Lambda function, visit the Lambda developer guide.
S3 Files is supported for Lambda functions not configured with a capacity provider, in all AWS Regions where both Lambda and S3 Files are available, at no additional charge beyond standard Lambda and S3 pricing.
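Once the mount is configured, the shared-workspace pattern described above is just ordinary file I/O against the mount path. The sketch below simulates that pattern locally with a temporary directory standing in for the configured mount path (shown as an assumed example, `/mnt/s3files`); inside Lambda the same `open()` calls would operate on S3-backed storage.

```python
import os
import tempfile

# Stand-in for the function's configured S3 Files mount path
# (e.g. /mnt/s3files inside Lambda -- an assumed example path).
MOUNT = tempfile.mkdtemp()

def write_step_output(step, payload):
    """An orchestrator or agent step persists its result as a plain file
    in the shared workspace."""
    path = os.path.join(MOUNT, f"{step}.txt")
    with open(path, "w") as f:
        f.write(payload)
    return path

def read_step_output(step):
    """A later step (possibly a different Lambda function mounting the
    same file system) reads the shared state back."""
    with open(os.path.join(MOUNT, f"{step}.txt")) as f:
        return f.read()

# One function writes analysis results; another picks them up.
write_step_output("analysis", "3 issues found")
result = read_step_output("analysis")
```

Because every function sees the same mount, no custom synchronization or download/upload plumbing is needed between pipeline steps.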
Amazon CloudWatch pipelines now supports configuration of processors via AI
Amazon CloudWatch pipelines now lets you configure log processors using natural language descriptions powered by generative AI. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Setting up the right combination of processors to parse and enrich logs can be time-consuming, especially when working with complex log formats. With AI-assisted configuration, you can simply describe the processing you need in plain language and have the pipeline configuration generated for you automatically.

When creating a pipeline in the CloudWatch console, toggle the AI-assisted option during the processing step and enter a natural language description of your desired transformations. The system generates the processor configuration along with a sample log event, so you can immediately verify the output before deploying. This reduces setup time and makes it easier to get your pipelines running correctly without needing deep familiarity with individual processor settings. AI-assisted processor configuration is available at no additional cost in all AWS Regions where CloudWatch pipelines is generally available. Standard CloudWatch Logs ingestion and storage rates still apply. To get started, open the Amazon CloudWatch console, navigate to pipelines under Ingestion, and follow the pipeline wizard. To learn more, see the CloudWatch pipelines documentation.
AWS Glue now supports OAuth 2.0 for Snowflake connectivity
Starting today, AWS Glue supports OAuth 2.0 authorization and authentication for native Snowflake connectivity, enabling customers to read from and write to Snowflake without sharing user credentials. This makes it easier for enterprises to maintain security compliance while building data integration pipelines. With OAuth support, you can now securely access Snowflake data within AWS Glue using temporary token-based authorization.

AWS Glue provides a built-in connector to Snowflake, which helps you integrate Snowflake data with other sources on a single platform while leveraging the scalability and performance of the AWS Glue Spark engine—all without installing or managing connector libraries. Previously, connecting to Snowflake required using persistent credentials or private keys. With OAuth 2.0 support, you can now eliminate credential management entirely, relying instead on secure, temporary tokens that enhance security and simplify access control. This approach enables granular access control, allowing you to define precise permissions for different users and applications. Additionally, token-based authentication provides improved auditability, making it easier to track and monitor data access patterns across your organization. OAuth 2.0 support for AWS Glue’s Snowflake connector is available in all AWS commercial Regions where AWS Glue is available. To get started with configuring your AWS Glue Snowflake connection with OAuth, visit the AWS Glue documentation.
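A connection like this is created through the Glue `CreateConnection` API. The sketch below builds a `ConnectionInput` payload for boto3's `glue.create_connection()`; the connection-property keys and the exact OAuth fields shown are assumptions for illustration, so check the Glue documentation for the authoritative schema before using them.

```python
def snowflake_oauth_connection(name, snowflake_host, secret_arn):
    """Sketch of a ConnectionInput for glue.create_connection() that uses
    OAuth 2.0 token-based auth instead of stored user credentials.
    Property names below (HOST, SecretArn, etc.) are illustrative
    assumptions; consult the AWS Glue docs for the exact schema."""
    return {
        "Name": name,
        "ConnectionType": "SNOWFLAKE",
        "ConnectionProperties": {
            "HOST": snowflake_host,  # e.g. <account>.snowflakecomputing.com
            "PORT": "443",
        },
        "AuthenticationConfiguration": {
            # Temporary token-based authorization; no persistent credentials.
            "AuthenticationType": "OAUTH2",
            # Secret holding the OAuth client configuration.
            "SecretArn": secret_arn,
        },
    }

conn = snowflake_oauth_connection(
    "sf-oauth-connection",
    "myaccount.snowflakecomputing.com",
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:sf-oauth",
)
# With credentials configured:
#   boto3.client("glue").create_connection(ConnectionInput=conn)
```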
AWS Transform custom is now available in six additional AWS Regions
AWS Transform custom is now available in six additional AWS Regions: Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (London).

AWS Transform custom enables organizations to modernize and transform code at scale using AWS-managed and custom transformations. You can upgrade language versions, migrate frameworks, optimize performance, and analyze code bases using transformations that are ready to use or can be customized to meet your organization’s specific requirements. These transformations benefit from continuous improvement, learning from each engagement to deliver increasingly accurate and efficient results.
With this expansion, AWS Transform custom is now available in a total of eight AWS Regions: US East (N. Virginia), Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (Frankfurt, London). To learn more, visit the AWS Transform product page and user guide.
Amazon EC2 G7e instances now available in AWS Local Zones in Los Angeles
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) G7e instances in AWS Local Zones in Los Angeles, California. G7e instances feature NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and 5th generation Intel Xeon Scalable (Emerald Rapids) processors, bringing high-performance GPU compute closer to end users in Los Angeles.

For creative workloads, you can use G7e instances to run studio workstation workloads with low-latency access to local storage, and post-production workloads including visual effects (VFX) editorial, color correction, and VFX finishing. G7e instances support enhanced real-time rendering on graphics engines and 2D/3D VFX composition software. For AI workloads, you can also use G7e instances to deploy Large Language Models (LLMs), inference, and agentic AI at the edge.
To get started, opt in to the Los Angeles Local Zone (us-west-2-lax-1b) from AWS Global View. You can enable G7e instances from the Amazon EC2 console, AWS Command Line Interface (AWS CLI), and AWS SDKs. G7e instances are available with On-Demand pricing and Savings Plans. To learn more, visit the AWS Local Zones Features page.
Amazon Aurora serverless: Up to 30% better performance, smarter scaling, and still scales to zero
Amazon Aurora serverless — the autoscaling database that scales up to support your most demanding workloads and down to zero when you don’t need it — just got faster and smarter, with up to 30% better performance than the previous version and enhanced scaling that understands your workload. It’s especially well-suited for agentic AI applications, which typically have bursts of activity, long idle windows, and unpredictable patterns. Aurora serverless handles all of it automatically, scaling capacity with your agents rather than against them, and you only pay for what you actually use. When not in use, the database automatically scales down to zero to save cost.

With improved performance and scaling, you can now use serverless for even more demanding workloads. The enhanced scaling algorithm lets you efficiently run workloads where multiple tasks compete for resources, such as busy web applications and API services. These improvements are available in platform version 4 at no additional cost. All new clusters, database restores, and new clones automatically launch on platform version 4. Existing clusters on platform version 1, 2, or 3 can upgrade directly to platform version 4 by applying the pending maintenance action, stopping and restarting the cluster, or using blue/green deployments. You can verify your cluster’s platform version in the AWS Management Console under the instance configuration section or via the RDS API’s ServerlessV2PlatformVersion parameter. Aurora serverless is an on-demand, automatic scaling configuration for Amazon Aurora. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the blog and the documentation, and get started by creating an Aurora serverless database in just a few steps in the AWS Management Console.
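The platform-version check mentioned above can be scripted against the RDS `DescribeDBClusters` response. The sketch below extracts the `ServerlessV2PlatformVersion` field (named in the announcement) from a response page; the sample response is shaped like the real API output, but the cluster names and version strings are illustrative.

```python
def platform_versions(describe_response):
    """Map DBClusterIdentifier -> ServerlessV2PlatformVersion from one
    page of an rds.describe_db_clusters() response."""
    return {
        c["DBClusterIdentifier"]: c.get("ServerlessV2PlatformVersion")
        for c in describe_response["DBClusters"]
    }

# Sample shaped like the RDS API response (values are illustrative).
sample = {
    "DBClusters": [
        {"DBClusterIdentifier": "app-db", "ServerlessV2PlatformVersion": "4"},
        {"DBClusterIdentifier": "legacy-db", "ServerlessV2PlatformVersion": "2"},
    ]
}
versions = platform_versions(sample)
# In practice you would fetch the page with:
#   boto3.client("rds").describe_db_clusters()
```

Clusters still reporting an older platform version are candidates for the upgrade paths described above (pending maintenance action, stop/restart, or blue/green deployment).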
AWS Backup adds Amazon Redshift Serverless and Amazon Aurora DSQL resource types to AWS Organizations backup policies
AWS Backup now supports Amazon Redshift Serverless namespaces and Amazon Aurora DSQL clusters as resource types in AWS Organizations backup policies. Organization administrators can now define backup policy rules that directly target these resource types across member accounts.

Previously, backing up Redshift Serverless namespaces and Aurora DSQL clusters through organization backup policies required using tag-based selections or backing up all resources in a member account. With this launch, administrators can specify these resource types directly in their backup policy selections, providing more precise control over which resources are included in or excluded from Organization-wide backup plans.
This capability is available in all AWS Commercial and GovCloud Regions where AWS Backup and the respective services are available. To get started, visit the AWS Organizations backup policies documentation or the AWS Backup console.
AWS Lambda Durable Execution SDK for Java is now generally available
Today, AWS announces the general availability of the AWS Lambda Durable Execution SDK for Java, empowering Java developers to build resilient, long-running workflows using Lambda durable functions. With this SDK, developers can create multi-step applications like order processing pipelines, AI agent orchestration, and human-in-the-loop approvals directly in their applications without implementing custom progress tracking or integrating external orchestration services.

Lambda durable functions extend Lambda’s event-driven programming model with operations that checkpoint progress automatically and pause execution for up to a year when waiting on external events. The AWS Lambda Durable Execution SDK for Java provides an idiomatic Java experience for building with Lambda durable functions. It includes steps for progress tracking, callback integration for human and agent-in-the-loop workflows, durable invocation for reliable function chaining, and waits for efficient suspension. The SDK is compatible with Java 17+ and can be deployed using Lambda managed runtimes or functions packaged as container images. The local testing emulator in the SDK enables developers to build and debug locally before deploying to production.
To get started, see the Lambda durable functions developer guide and the AWS Lambda Durable Execution SDK for Java on GitHub. For Regional availability and pricing details, see the AWS Regional Services List and AWS Lambda Pricing.
Amazon Connect Outbound Campaigns now supports contact priority ordering
Amazon Connect Outbound Campaigns now allows you to dial contacts in configurable priority order based on up to 10 profile attributes for voice campaigns and voice activities in journeys. This helps you focus agent time on the most valuable customers or time-sensitive opportunities, improving campaign effectiveness and conversion rates.

With contact priority ordering, you can sort segments on attributes such as customer lifetime value, account tier, or appointment date. For example, a financial services team can prioritize outreach to high-value accounts nearing contract renewal, or a healthcare provider can ensure patients with the earliest upcoming appointments are contacted first. Initial dial attempts always take precedence over reattempts, ensuring your priority order is maintained throughout campaign execution.
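The ordering rule described above — initial dial attempts always before reattempts, then up to 10 sort attributes — can be sketched as a compound sort key. This is an illustrative model of the behavior, not the Connect data schema; the `is_reattempt` and `attrs` field names are assumptions for the example.

```python
def dialing_order(contacts, sort_attrs):
    """Order campaign contacts as described in the announcement: initial
    dial attempts precede reattempts, then contacts sort by the given
    profile attributes (at most 10). Field names are illustrative."""
    if len(sort_attrs) > 10:
        raise ValueError("at most 10 sort attributes are supported")
    return sorted(
        contacts,
        # False sorts before True, so initial attempts come first;
        # ties break on the configured attributes in order.
        key=lambda c: (c["is_reattempt"], tuple(c["attrs"][a] for a in sort_attrs)),
    )

contacts = [
    {"id": "c1", "is_reattempt": True,  "attrs": {"tier": 1}},
    {"id": "c2", "is_reattempt": False, "attrs": {"tier": 2}},
    {"id": "c3", "is_reattempt": False, "attrs": {"tier": 1}},
]
order = [c["id"] for c in dialing_order(contacts, ["tier"])]
```

Here the tier-1 reattempt (`c1`) still dials after both initial attempts, matching the rule that initial attempts always take precedence.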
This capability is available at no additional cost in all AWS Regions where Amazon Connect Outbound Campaigns is offered. To get started, configure sort attributes when building segments in Amazon Connect Customer Profiles. To learn more, see the Amazon Connect Outbound Campaigns best practices and how to build customer segments.
AWS Blogs
AWS Japan Blog (Japanese)
- Innovation sandbox on AWS with real-time analytics dashboards
- AI-Driven Business Process Re-Engineering: The Day We Stopped Asking “What’s Your Problem?”
- Track Amazon Bedrock costs by caller with IAM principal-based cost allocation
- Windows Server Licenses Now Available on Amazon EVS: A Step-by-Step Guide
- [Event Report & Material Release] Graduation from trial! Full-scale utilization of Kiro’s specification-driven development in Osaka
Artificial Intelligence
- From developer desks to the whole organization: Running Claude Cowork in Amazon Bedrock
- End-to-end lineage with DVC and Amazon SageMaker AI MLflow apps
Networking & Content Delivery
- Centralized ingress inspection architecture in AWS Cloud WAN
- Automated network incident response with AWS DevOps Agent