5/11/2026, 12:00:00 AM ~ 5/12/2026, 12:00:00 AM (UTC)
Recent Announcements
Announcing Region Expansion of G6e instances on SageMaker Studio notebooks
We are pleased to announce general availability of Amazon EC2 G6e instances in the Middle East (Dubai), Asia Pacific (Tokyo, Seoul) and Europe (Frankfurt, Stockholm, Spain) on SageMaker Studio notebooks.
Amazon EC2 G6e instances are powered by up to 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors. G6e instances deliver up to 2.5x better performance compared to EC2 G5 instances. Customers can use G6e instances to interactively test model deployment and for interactive model training use cases such as generative AI fine-tuning. You can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio.
Visit developer guides for instructions on setting up and using JupyterLab and CodeEditor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
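As a minimal sketch of how you might request a Studio notebook on one of these instance types, the following builds the parameters for a SageMaker `CreateApp` call (e.g. via `boto3.client("sagemaker").create_app(**request)`). The domain ID, user profile name, and app name here are placeholders, not real identifiers; `ml.g6e.xlarge` is one of the G6e instance sizes.

```python
def studio_app_request(domain_id: str, user_profile: str,
                       instance_type: str = "ml.g6e.xlarge",
                       app_name: str = "g6e-notebook") -> dict:
    """Parameters for a SageMaker CreateApp call that launches a
    JupyterLab app on the requested instance type (here, a G6e)."""
    return {
        "DomainId": domain_id,          # your Studio domain ID (placeholder)
        "UserProfileId": user_profile,  # your Studio user profile (placeholder)
        "AppType": "JupyterLab",
        "AppName": app_name,
        "ResourceSpec": {"InstanceType": instance_type},
    }

# Pass the dict straight to the API:
#   boto3.client("sagemaker").create_app(**studio_app_request("d-example123", "my-user"))
```

The same pattern applies to the G6 and P4de expansions below; only the `InstanceType` value changes.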
Announcing Region Expansion of G6 instances on SageMaker Studio notebooks
We are pleased to announce general availability of Amazon EC2 G6 instances in the Middle East (Dubai) and Asia Pacific (Malaysia) on SageMaker Studio notebooks.
Amazon EC2 G6 instances are powered by up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. G6 instances offer 2x better performance for deep learning inference compared to EC2 G4dn instances. Customers can use G6 instances to interactively test model deployment and for interactive model training use cases such as generative AI fine-tuning and inference workloads, natural language processing, language translation, computer vision, and recommender engines.
Visit developer guides for instructions on setting up and using JupyterLab and CodeEditor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
Announcing Region Expansion of P4de instances on SageMaker Studio notebooks
We are pleased to announce general availability of Amazon EC2 P4de instances in Asia Pacific (Tokyo, Singapore) and Europe (Frankfurt) on SageMaker Studio notebooks.
Amazon EC2 P4de instances are powered by 8 NVIDIA A100 GPUs with 80 GB of high-performance HBM2e GPU memory each, 2x that of the GPUs in our current P4d instances. The new P4de instances provide a total of 640 GB of GPU memory, delivering up to 60% better ML training performance along with 20% lower cost to train when compared to P4d instances. The improved performance will allow customers to reduce model training times and accelerate time to market. Increased GPU memory on P4de will also benefit workloads that need to train on large datasets of high-resolution data. Visit developer guides for instructions on setting up and using JupyterLab and CodeEditor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
Amazon Aurora DSQL is now available in five additional AWS Regions
Amazon Aurora DSQL single-Region clusters are now available in Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Europe (Stockholm), and South America (Sao Paulo). Aurora DSQL is the fastest serverless, distributed SQL database that enables you to build always available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resilience effortless for your applications and offers the fastest distributed SQL reads and writes.
With this launch, Aurora DSQL is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Canada West (Calgary), Asia Pacific (Hong Kong), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo). Get started with Aurora DSQL for free with the AWS Free Tier. To learn more, visit the Aurora DSQL webpage and documentation.
AWS HealthOmics now supports caching of cancelled workflow runs
AWS HealthOmics now supports caching completed task outputs of cancelled runs, enabling customers to reuse outputs and avoid recomputing previously completed tasks. When caching is enabled and a run is cancelled, HealthOmics automatically stores completed task outputs in the customer’s S3 bucket, allowing customers to restart runs from the point of cancellation. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs at scale with fully managed bioinformatics workflows.
Caching of cancelled runs helps researchers, bioinformaticians, and workflow developers debug and iteratively develop workflows efficiently by storing intermediate files and completed task outputs for inspection. This saves customers the cost of recomputing completed tasks that may have taken hours and accelerates subsequent runs by executing only the remaining incomplete tasks.
Caching cancelled runs is now available for Nextflow, WDL, and CWL runs in all AWS HealthOmics regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Israel (Tel Aviv), and Asia Pacific (Singapore, Seoul). To learn more, visit the workflow cache documentation.
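A rough sketch of how a run might be started with a cache attached, assuming the HealthOmics run-cache workflow (a cache created beforehand, then referenced from StartRun via `cacheId` and `cacheBehavior`). The workflow ID, role ARN, S3 URI, and cache ID below are hypothetical placeholders.

```python
def cached_run_request(workflow_id: str, role_arn: str,
                       output_uri: str, cache_id: str) -> dict:
    """Parameters for a HealthOmics StartRun call with a run cache attached.

    With cacheBehavior set to CACHE_ALWAYS, completed task outputs are
    written to the cache as tasks finish, so if the run is later cancelled
    a restarted run re-executes only the tasks that never completed.
    """
    return {
        "workflowId": workflow_id,
        "roleArn": role_arn,      # IAM role HealthOmics assumes for the run
        "outputUri": output_uri,  # S3 prefix for final run outputs
        "cacheId": cache_id,      # created beforehand (e.g. CreateRunCache)
        "cacheBehavior": "CACHE_ALWAYS",
    }

# Usage (placeholders throughout):
#   boto3.client("omics").start_run(**cached_run_request(
#       "1234567", "arn:aws:iam::111122223333:role/OmicsRunRole",
#       "s3://my-bucket/runs/", "cache-001"))
```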
AWS WAF introduces dynamic label interpolation for custom request and response handling
AWS WAF now supports dynamic label interpolation, enabling you to forward WAF classification signals to your origin and embed context in responses with a single rule. Security engineers who previously maintained a separate rule for every signal value can now use ${namespace:} syntax in custom request headers, response headers, and response bodies to forward an entire label namespace at once. For example, one rule with a dynamic variable can forward all IP reputation signals to your application, which can then respond adaptively, such as by enforcing multi-factor authentication (MFA).
Interpolation also introduces synthetic labels: built-in values resolved from request context, including client IP address, WAF request ID, and JA3 and JA4 fingerprints. You can embed these in custom block pages and challenge pages so users reporting false positives have a reference ID to cite, or forward TLS fingerprints to your application for adaptive auth decisions. Interpolation works with any label namespace, including AWS Managed Rules, AWS Marketplace rule groups, and your own custom labels. Headers automatically adapt as new labels are added to the namespace, and when multiple labels match, values resolve to a comma-separated list.
Dynamic label interpolation is available in all AWS Regions where AWS WAF is available at no additional cost. There are no new API fields or configuration steps. To get started, see Dynamic label interpolation in the AWS WAF Developer Guide, or explore the sample on GitHub.
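To illustrate the shape of such a rule, here is a sketch of a WAFv2 rule (as a Python dict you could place in a web ACL's rule list) that matches on a label namespace and forwards all labels in it via one interpolated request header. The rule name, header name, and the IP-reputation namespace string are illustrative assumptions, not values from this announcement.

```python
def forward_namespace_rule(namespace: str) -> dict:
    """A WAFv2 rule dict that counts requests carrying any label in
    `namespace` and inserts one custom header whose ${namespace:} value
    resolves, at evaluation time, to a comma-separated list of every
    matching label in that namespace."""
    return {
        "Name": "forward-ip-reputation-labels",   # illustrative name
        "Priority": 10,
        "Statement": {
            # Match when any label in the namespace was applied earlier.
            "LabelMatchStatement": {"Scope": "NAMESPACE", "Key": namespace}
        },
        "Action": {
            "Count": {
                "CustomRequestHandling": {
                    "InsertHeaders": [
                        # ${<namespace>} is the dynamic interpolation variable.
                        {"Name": "ip-reputation", "Value": f"${{{namespace}}}"}
                    ]
                }
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "ForwardIpReputationLabels",
        },
    }

# Example with an assumed managed-rule label namespace:
#   forward_namespace_rule("awswaf:managed:aws:amazon-ip-list:")
```

The application behind the origin can then read the single header and branch on whichever labels appear, rather than relying on one rule per label value.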
Claude Platform on AWS is now generally available
Today, AWS announced the general availability of Claude Platform on AWS, a new service that gives customers direct access to Anthropic’s native Claude Platform experience through their existing AWS account. AWS is the first cloud provider to offer access to the native Claude Platform experience. Developers and organizations now have the choice to access Anthropic’s native Claude Platform experience, including APIs, console, and early-access beta features, directly through their existing AWS account, without managing separate accounts, billing, or tracking.
Claude Platform on AWS is operated by Anthropic, and customer data is processed outside the AWS security boundary. Claude Platform on AWS is designed for development teams and enterprises that want access to Anthropic’s native Claude Platform development experience and do not have specific regional data residency requirements. Customers still use existing IAM credentials and access controls, consolidated AWS billing, and CloudTrail audit logging for full security visibility. Features available through Claude Platform on AWS include Claude Managed Agents (beta), advisor strategy (beta), web search, web fetch, code execution, files API (beta), Skills (beta), MCP connector (beta), prompt caching, citations, batch processing, and the Claude Console for prompt development and evaluation.
Claude Platform on AWS is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Milan), Europe (Zurich), Europe (Paris), Europe (Stockholm), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Jakarta), Asia Pacific (Melbourne), and Asia Pacific (Sydney). To learn more, visit the Claude Platform on AWS product page. To get started, see the Claude Platform on AWS documentation.
AWS Transform adds containerization capability during migrations
AWS Transform now supports replatforming applications to containers during migration to AWS. This release extends AWS Transform’s agentic AI capabilities to automate the containerization of your source code, enabling you to migrate and modernize in parallel, reducing the time and complexity of moving from on-premises to cloud-native architectures. Migration teams can containerize source code from GitHub, Bitbucket, GitLab, or .zip files, generate Docker images, publish to Amazon Elastic Container Registry (Amazon ECR), and deploy to Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS). This brings containerization into the same workflow your team uses to plan and execute rehost migrations.
AWS Transform analyzes your source code repositories, generates Dockerfiles, and builds container images with integrated security scanning for common vulnerabilities and exposures (CVEs). It produces deployment-ready Terraform infrastructure-as-code and Helm charts for your target environment. The service supports monolithic repositories (monorepos) and multi-repo structures, private dependency resolution through AWS CodeArtifact, and containerization of thousands of applications at scale. During migration wave planning, you can assign applications to either a rehost or replatform-to-containers path, so you can move and realize the benefits of AWS faster. This new capability is available in all AWS Regions where AWS Transform is offered.
To learn more, please visit the AWS Transform User Guide.
AWS Blogs
AWS Japan Blog (Japanese)
- 3-person development in 2 days: realistic responses obtained by the Hitachi Group’s first AI-DLC implementation
- Contribution: Introduction of “Efforts to Promote the Utilization of JPX Owned Data (J-LAKE) Using Amazon Quick Sight” by JPX Research Institute Co., Ltd.
- AWS Weekly — 2026/5/4
- Weekly Generative AI with AWS — 2026/5/4
- Yamato Protec built an electronic document storage system using AI in just 2 days
- AWS MCP Server is now generally available
- Ramen Yamaoka Family’s Iceberg on AWS data pipeline realized with Fivetran’s CDC function
- Modernize your workflow: Amazon WorkSpaces launches dedicated desktop for AI agents (preview)
AWS News Blog
AWS Architecture Blog
AWS Big Data Blog
- How to use streamlined permissions for Amazon S3 Tables and Iceberg materialized views
- Improve DynamoDB analytics with AWS Glue zero-ETL schema and partition controls
- How to build cross-Region resilience for Amazon OpenSearch Service with Amazon MSK
AWS Database Blog
AWS DevOps & Developer Productivity Blog
Artificial Intelligence
- Building web search-enabled agents with Strands and Exa
- Introducing Claude Platform on AWS: Anthropic’s native platform, through your AWS account
- Manufacturing intelligence with Amazon Nova Multimodal Embeddings
- How Miro uses Amazon Bedrock to boost software bug routing accuracy and improve time-to-resolution from days to hours
- Amazon Quick: Accelerating the path from enterprise data to AI-powered decisions
Networking & Content Delivery
- How FIS centralized 13,000 VPC endpoints to strengthen security and simplify operations
- Network connectivity patterns for agents deployed on Amazon Bedrock AgentCore Runtime
- Migrate from Static Routing to Dynamic BGP Routing on AWS Site-to-Site VPN
- Building production-ready DNS infrastructure with AWS CDK
- Enhanced security with DMZ architecture using Amazon VPC Block Public Access