5/7/2026, 12:00:00 AM ~ 5/8/2026, 12:00:00 AM (UTC)
Recent Announcements
Amazon EC2 G6 instances now available in AWS European Sovereign Cloud (Germany)
Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are available in AWS European Sovereign Cloud (Germany). G6 instances can be used for a wide range of graphics-intensive and machine learning (ML) use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization. G6 instances are also well-suited for graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage. In addition to AWS European Sovereign Cloud (Germany), Amazon EC2 G6 instances are available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, London, Paris, Spain, Stockholm, and Zurich), Asia Pacific (Mumbai, Tokyo, Malaysia, Seoul, and Sydney), South America (São Paulo), Middle East (UAE), and Canada (Central) Regions. Customers can purchase G6 instances as On-Demand Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6 instance page.
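For readers starting from the SDK rather than the console, the launch boils down to a RunInstances request. The sketch below shows the minimal request shape only; the AMI ID is a placeholder and the smallest G6 size is assumed.

```python
# Minimal request shape for launching one G6 instance via the EC2
# RunInstances API. The AMI ID is a placeholder -- pick a GPU-ready AMI
# for your Region. A real call would be made with:
#   boto3.client("ec2", region_name=...).run_instances(**run_instances_request)
run_instances_request = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "g6.xlarge",         # 1x NVIDIA L4 (24 GB GPU memory)
    "MinCount": 1,
    "MaxCount": 1,
}
```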
Amazon EC2 X8i instances are now available in additional regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8i instances are available in the Europe (Ireland) and Asia Pacific (Mumbai) Regions. These instances are powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They deliver up to 43% higher performance, 1.5x more memory capacity (up to 6TB), and 3.3x more memory bandwidth compared to previous generation X2i instances.

X8i instances are designed for memory-intensive workloads like SAP HANA, large databases, data analytics, and Electronic Design Automation (EDA). Compared to X2i instances, X8i instances offer up to 50% higher SAPS performance, up to 47% faster PostgreSQL performance, 88% faster Memcached performance, and 46% faster AI inference performance. X8i instances come in 14 sizes, from large to 96xlarge, including two bare metal options. To get started, visit the AWS Management Console. X8i instances can be purchased via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the X8i instances page.
Amazon SageMaker Unified Studio adds identity and user management features
Amazon SageMaker Unified Studio announces new administration features that give administrators more control over identity configuration and user management for both IAM and Identity Center domain types.

In SageMaker IAM domains, administrators can now onboard users through single sign-on by configuring AWS IAM Identity Center. After configuration, administrators can add IAM roles, IAM users, IAM Identity Center users, and IAM Identity Center groups as project members. Teams can collaborate on project data and resources regardless of how individual members authenticate. Administrators can set up IAM Identity Center integration in the SageMaker Unified Studio admin portal. A new domain user management page for SageMaker IAM domains gives administrators a consolidated view of all users active in the domain, where they can manage access and update permissions from a single screen.

In SageMaker Identity Center domains, users can now access the SageMaker Unified Studio portal by federating through an IAM role. SageMaker Unified Studio creates a unique user session for each federated user, so users sharing the same role don’t overwrite each other’s work. Administrators can audit individual actions even when multiple users share a single IAM role. With these features, customers can use IAM identity or IAM Identity Center corporate identity across both domain types, giving teams flexibility to collaborate in SageMaker Unified Studio regardless of their authentication method.

These features are available in the following AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon). To learn more, visit the SageMaker Unified Studio documentation.
Amazon EC2 G7e instances now available in Europe (London) region
Starting today, Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are available in the Europe (London) Region. G7e instances offer up to 2.3x the inference performance of G6e instances.

Customers can use G7e instances to deploy large language models (LLMs), agentic AI models, multimodal generative AI models, and physical AI models. G7e instances offer the highest performance for spatial computing workloads as well as workloads that require both graphics and AI processing capabilities. G7e instances feature up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, with 96 GB of memory per GPU, and 5th Generation Intel Xeon processors. They support up to 192 virtual CPUs (vCPUs) and up to 1600 Gbps of networking bandwidth. G7e instances support NVIDIA GPUDirect Peer to Peer (P2P), which boosts performance for multi-GPU workloads. Multi-GPU G7e instances also support NVIDIA GPUDirect Remote Direct Memory Access (RDMA) with EFA in EC2 UltraClusters, reducing latency for small-scale multi-node workloads.
You can use G7e instances for Amazon EC2 in the following AWS Regions: US West (Oregon), US East (N. Virginia, Ohio), Europe (Spain, London) and Asia Pacific (Tokyo, Seoul). You can purchase G7e instances as On-Demand Instances, Spot Instances, or as part of Savings Plans.
To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit G7e instances.
AWS Capabilities by Region now supports availability notifications
Today, AWS announces availability notifications for AWS Capabilities by Region in AWS Builder Center, a new subscription-based system that automatically alerts builders when AWS services and features become available in their target Regions. Availability notifications make it easy for builders to track availability of 1,500+ services and features across 37 AWS Regions, accelerating infrastructure planning and deployment decisions.

With availability notifications, builders can subscribe at the service level through the AWS Builder Center UI, and the subscription automatically covers all underlying features across selected Regions, so there’s no need to track each feature individually. Notifications are delivered through two channels: instantaneous in-app alerts within AWS Builder Center, and a consolidated weekly email digest. Subscriptions and notification preferences can be managed through Settings > Notifications in AWS Builder Center. Common use cases include tracking a specific capability launch, monitoring service parity across AWS Regions, and preparing for upcoming migrations or Regional expansions. For example, a solutions architect expanding a generative AI application into new Regions can subscribe to Amazon Bedrock and receive automatic updates as Knowledge Bases, Guardrails, and other features become available.
AWS Elemental MediaTailor launches Monetization Functions
AWS Elemental MediaTailor now supports monetization functions, a new capability that lets customers customize how MediaTailor builds ad decision server (ADS) requests and manages session data during ad-personalized playback. With monetization functions, customers can call external APIs and run inline data transformations at defined points in the playback session, eliminating the need to build and operate middleware between the player and the ADS.

Common use cases include resolving hashed email addresses into privacy-compliant identity envelopes through providers such as LiveRamp, appending contextual metadata from a content management system to every ad request through providers like GraceNote, activating header bidding workflows through providers like The Trade Desk, and running A/B tests across multiple ad decision servers. Monetization functions are fail-open by design: if a function encounters an error, exceeds its timeout, or hits a resource limit, MediaTailor discards the output and proceeds with default ad-insertion behavior, so viewers’ playback is never interrupted.
Monetization functions are generally available in all AWS Regions where AWS Elemental MediaTailor operates. You are billed per lifecycle hook invocation at a flat rate that does not depend on the number, type, or complexity of functions. For full details, see the MediaTailor pricing page, the Monetization Functions section of the MediaTailor User Guide, and the MediaTailor product page.
AWS Advanced JDBC Wrapper now provides client-side encryption
The AWS Advanced JDBC Wrapper now provides column-level client-side encryption through its KMS Encryption plugin. The wrapper provides advanced capabilities such as failover handling, AWS authentication integration, and enhanced monitoring for Amazon Aurora and Amazon RDS open source databases. It enables Java applications to encrypt sensitive data before it reaches the database without changing application code.

Database encryption at rest and TLS in transit are foundational security controls. However, these controls decrypt the data within the database engine. A compromised credential, overprivileged administrator, or SQL injection attack can expose sensitive data in plaintext, creating compliance risk under PCI DSS, HIPAA, and GDPR. The KMS Encryption plugin closes this gap by working at the JDBC driver level. When your application writes to an encrypted column, the plugin encrypts the value before it reaches the database. When reading, it decrypts the value before returning it. Plaintext remains visible only to your application, while the database sees encrypted values. The database can verify data integrity through HMAC validation without needing the encryption key. The plugin integrates seamlessly with your existing SQL, Spring, Hibernate, and connection pool setup without requiring code changes. The KMS Encryption plugin works with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible databases. The plugin is available as an open-source project under the Apache 2.0 license. To learn more, see AWS Advanced JDBC Wrapper documentation.
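The pattern described here is encrypt-then-MAC with separate keys: the party holding only the MAC key (standing in for the database's HMAC validation) can check integrity without ever being able to decrypt. The sketch below illustrates that pattern conceptually; the SHA-256-based keystream is a toy stand-in for a real cipher, and nothing here reflects the plugin's actual wire format or its AWS KMS key management.

```python
# Conceptual encrypt-then-MAC sketch: separate encryption and MAC keys, so
# integrity can be verified (as the database does via HMAC) without the
# encryption key. Toy cipher for illustration only -- not the plugin's format.
import hashlib
import hmac
import secrets

ENC_KEY = secrets.token_bytes(32)  # held only by the application
MAC_KEY = secrets.token_bytes(32)  # sufficient for integrity checks alone

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream derived from key + nonce via SHA-256.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_column(plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(ENC_KEY, nonce, len(plaintext))))
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # what the database would store

def verify_integrity(blob: bytes) -> bool:
    # Possible with MAC_KEY alone -- no access to plaintext required.
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def decrypt_column(blob: bytes) -> bytes:
    if not verify_integrity(blob):
        raise ValueError("ciphertext failed integrity check")
    nonce, ct = blob[:16], blob[16:-32]
    return bytes(c ^ k for c, k in zip(ct, _keystream(ENC_KEY, nonce, len(ct))))

stored = encrypt_column(b"ssn-123-45-6789")
assert decrypt_column(stored) == b"ssn-123-45-6789"
```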
Amazon Connect Outbound Campaigns adds multi-contact time zone detection
Amazon Connect Outbound Campaigns now detects customer time zones using all phone numbers and addresses on a customer profile, not just the primary contact fields. Previously, time zone detection used only the primary phone number, which could miss customers who span multiple time zones.

When a profile’s contact information spans multiple time zones, the system delivers only during hours that fall within your configured window in every detected time zone, and skips profiles when no overlap exists. For example, if a customer has a mobile number with an Eastern time area code and a business number with a Pacific time area code, and your campaign is configured for 9am–5pm delivery, messages will only be sent between 12pm–5pm ET (9am–2pm PT), when both time zones fall within the allowed window.
This capability is available in all AWS Regions where Amazon Connect Outbound Campaigns is offered at no additional cost. To learn more, see the Amazon Connect Outbound Campaign documentation.
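The overlap rule above can be sketched in a few lines of Python. This is an illustration of the window-intersection logic only: the time zone names and date are made up for the example, and the real service detects zones from the profile's phone numbers and addresses rather than taking them as input.

```python
# Intersect a configured 9am-5pm delivery window across every detected
# time zone; skip (return None) when no overlap exists.
from datetime import date, datetime, time
from zoneinfo import ZoneInfo

UTC = ZoneInfo("UTC")

def allowed_utc_window(day, tz_name, start=time(9), end=time(17)):
    """The configured local delivery window for one time zone, in UTC."""
    tz = ZoneInfo(tz_name)
    return (datetime.combine(day, start, tzinfo=tz).astimezone(UTC),
            datetime.combine(day, end, tzinfo=tz).astimezone(UTC))

def overlap_window(day, tz_names):
    """Intersection of all detected time zones' windows, or None."""
    windows = [allowed_utc_window(day, name) for name in tz_names]
    lo = max(w[0] for w in windows)
    hi = min(w[1] for w in windows)
    return (lo, hi) if lo < hi else None

# The announcement's example: Eastern + Pacific numbers, 9am-5pm window.
win = overlap_window(date(2026, 5, 7), ["America/New_York", "America/Los_Angeles"])
# -> 16:00-21:00 UTC, i.e. 12pm-5pm ET / 9am-2pm PT
```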
Amazon EC2 M8gn and M8gb instances are now available in AWS Europe (Ireland) region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8gn and M8gb instances are available in the AWS Europe (Ireland) Region. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors, and feature the latest 6th generation AWS Nitro Cards. M8gn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. M8gb instances offer up to 300 Gbps of EBS bandwidth to provide higher EBS performance compared to equivalently sized Graviton4-based instances.

M8gn instances are ideal for network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function (UPF). M8gb instances are ideal for workloads requiring high block storage performance, such as high performance databases and NoSQL databases.
M8gn instances offer instance sizes up to 48xlarge and metal-48xl, up to 768 GiB of memory, up to 600 Gbps of networking bandwidth, and up to 120 Gbps of bandwidth to Amazon Elastic Block Store (EBS). They also support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, and 48xlarge sizes, as well as metal-24xl and metal-48xl. M8gb instances offer sizes up to 48xlarge and metal-48xl, up to 768 GiB of memory, up to 300 Gbps of EBS bandwidth, and up to 400 Gbps of networking bandwidth. They support EFA networking on the 16xlarge, 24xlarge, and 48xlarge sizes, as well as metal-24xl and metal-48xl, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.
The new instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland). Metal sizes are available in US East (N. Virginia) region. To learn more, see Amazon EC2 M8gn and M8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.
Amazon SageMaker HyperPod now supports AMI-based node lifecycle configuration for Slurm clusters
Amazon SageMaker HyperPod now supports AMI-based configuration that provisions Slurm cluster nodes with the software and configurations needed for a production-ready environment to run AI/ML training workloads. This removes the need to download, configure, or upload lifecycle configuration scripts to Amazon S3. With fewer operational steps to prepare a cluster and no lifecycle configuration scripts executing during node provisioning, cluster creation time is significantly reduced, so you can start running jobs sooner.

AMI-based configuration includes required software such as Docker, Enroot, and Pyxis, and configurations such as Slurm accounting, SSH key generation, Slurm log rotation, and user home directory setup. To enable AMI-based configuration, omit the LifeCycleConfig block from the instance group configuration when creating clusters using the CreateCluster API, or when using the SageMaker AI console, select “None” under Lifecycle scripts in Custom setup. For additional customization on top of the AMI-based configuration baseline, an extension script can be provided, allowing you to focus only on what capabilities and software to add, such as user configuration, observability, or LDAP integration.
Extension scripts can be configured when creating clusters through both the API and the SageMaker AI console. Using the CreateCluster API, specify the new OnInitComplete parameter and SourceS3Uri in the LifeCycleConfig block. Via the console, provide the S3 URI to the extension script in the “Extension script file in S3” field in Custom setup. For advanced use cases that require full control over provisioning, custom lifecycle configuration scripts remain fully supported through both the API and the SageMaker AI console.
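The two API-side configurations described above can be sketched as CreateCluster request bodies. This is a hedged sketch: the cluster, group, role, and S3 names are illustrative, and the exact value shape of the new OnInitComplete parameter should be confirmed against the API reference.

```python
# Request shapes for the SageMaker CreateCluster API under AMI-based node
# lifecycle configuration. Names other than the documented fields are
# illustrative placeholders.

# 1) AMI-based baseline: simply omit the LifeCycleConfig block.
create_cluster_request = {
    "ClusterName": "demo-hyperpod-cluster",          # illustrative name
    "InstanceGroups": [
        {
            "InstanceGroupName": "worker-group",     # illustrative name
            "InstanceType": "ml.p5.48xlarge",
            "InstanceCount": 2,
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
            # No "LifeCycleConfig" key: nodes get the AMI-based baseline
            # (Docker, Enroot, Pyxis, Slurm accounting, etc.).
        }
    ],
}

# 2) Baseline plus an extension script layered on top, using the announced
#    OnInitComplete parameter alongside SourceS3Uri (assumed shape).
with_extension = dict(create_cluster_request["InstanceGroups"][0])
with_extension["LifeCycleConfig"] = {
    "SourceS3Uri": "s3://my-bucket/hyperpod/",       # illustrative S3 URI
    "OnInitComplete": "extend.sh",                   # illustrative script
}

# Either shape would be sent with:
#   boto3.client("sagemaker").create_cluster(**create_cluster_request)
```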
This feature is available in all AWS Regions where SageMaker HyperPod is available. To get started with creating HyperPod Slurm clusters with AMI-based node lifecycle configuration, see Getting started with SageMaker HyperPod using the AWS CLI or Getting started with SageMaker HyperPod using the SageMaker AI console in the SageMaker AI developer guide.
AWS India customers can now use UPI Scan and Pay for sign-up and payments
India customers can now use UPI (Unified Payments Interface) Scan and Pay to sign up for AWS or pay their invoices.

UPI is a popular and convenient payment method in India, which facilitates instant bank-to-bank transfers between two parties through mobile phones with internet. The new Scan and Pay experience simplifies payments by allowing customers to scan a QR code displayed in the AWS Console using their UPI mobile app (such as Google Pay, PhonePe, Paytm, or Amazon Pay), eliminating the need to manually enter a UPI ID. This enhancement makes the UPI payment experience more secure, convenient, and error-free for customers signing up for AWS or making one-time payments. Scan and Pay reduces friction and aligns with how customers commonly use UPI for everyday transactions. Customers can also set up UPI AutoPay using Scan and Pay for automatic monthly payments up to INR 15,000. To use this feature, customers log in to the AWS Console and select UPI as their payment method during signup or when making a payment. A QR code is displayed on screen, which customers scan using their UPI mobile app to verify and authorize the transaction. To learn more, see Managing Payment Methods in India.
Introducing Amazon EC2 R8idn and R8idb instances
AWS is announcing the general availability of Amazon EC2 R8idn and Amazon EC2 R8idb instances, powered by custom sixth generation Intel Xeon Scalable processors, available only on AWS. These instances also feature the latest sixth generation AWS Nitro cards. R8idn and R8idb deliver up to 43% better compute performance per vCPU compared to previous generation R6in instances.

Amazon EC2 R8idn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among enhanced networking EC2 instances, combined with up to 22,800 GB of local NVMe instance storage. Amazon EC2 R8idb instances deliver up to 300 Gbps EBS bandwidth and up to 1,440K IOPS, the highest EBS performance among non-accelerated compute EC2 instances. R8idn instances are ideal for memory-intensive workloads requiring high network throughput and local storage, such as in-memory databases, real-time big data analytics, and large-scale distributed caching layers. R8idb instances are ideal for memory-intensive workloads requiring high block storage performance, such as large-scale commercial databases, high-performance file systems, and enterprise analytics platforms. Amazon EC2 R8idn and R8idb instances are available in US East (N. Virginia, Ohio), US West (Oregon), and Europe (Spain). R8idn and R8idb instances are available via Savings Plans, On-Demand, and Spot instances. For more information, visit the Amazon EC2 R8i instance page.
Introducing Amazon EC2 M8idn and M8idb instances
AWS is announcing the general availability of Amazon EC2 M8idn and Amazon EC2 M8idb instances, powered by custom sixth generation Intel Xeon Scalable processors, available only on AWS. These instances also feature the latest sixth generation AWS Nitro cards. M8idn and M8idb deliver up to 43% better compute performance per vCPU compared to previous generation M6idn instances.

Amazon EC2 M8idn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among enhanced networking EC2 instances. Amazon EC2 M8idb instances deliver up to 300 Gbps EBS bandwidth, the highest EBS performance among non-accelerated compute EC2 instances. M8idn instances are ideal for network-intensive general purpose workloads requiring local storage, such as distributed compute, data analytics, and high-performance file systems. M8idb instances are ideal for storage-intensive general purpose workloads such as large commercial databases, data lakes, and NoSQL databases that benefit from both high EBS throughput and low-latency local NVMe storage. Amazon EC2 M8idn and Amazon EC2 M8idb instances are available in US East (N. Virginia), US West (Oregon), and Europe (Spain). M8idn and M8idb instances are available via Savings Plans, On-Demand, and Spot instances. For more information, visit the Amazon EC2 M8i instance page.
Agents that transact: Amazon Bedrock AgentCore now includes Payments (preview)
Today, Amazon Bedrock AgentCore announces the preview of AgentCore payments, enabling AI agents to autonomously access and pay for APIs, MCP servers, web content, and other agents. Built in partnership with Coinbase and Stripe, AgentCore payments is the first managed payment capability purpose-built for autonomous agents, handling the full payment lifecycle from wallet authentication through transaction execution to spending governance and observability. As AI agents become more capable and services shift to pay-per-use models built for machine consumption, developers need infrastructure that lets their agents transact without building bespoke billing integrations, credential management, orchestration logic, budgeting, and observability from scratch.

With AgentCore payments, developers connect a Coinbase CDP wallet or Stripe Privy wallet as a payment connection, set session-level spending limits, and their agent transacts autonomously during execution. When an agent encounters a paid resource and receives an HTTP 402 response, AgentCore handles the x402 protocol negotiation, wallet authentication, stablecoin payment, and proof delivery back to the endpoint, all without interrupting the agent’s reasoning loop. Spending limits are enforced deterministically at the infrastructure layer, and every transaction is observable through the same logs, metrics, and traces developers already use in AgentCore. The Coinbase x402 Bazaar MCP server is also available through AgentCore Gateway, providing over 10,000 x402 endpoints that agents can search, discover, and pay for autonomously.
AgentCore payments is available in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). Learn more about it through the blog, deep dive using the documentation, and get started with the AgentCore CLI.
AWS Resource Explorer is now available in AWS GovCloud (US-East) and (US-West)
We are pleased to announce that AWS Resource Explorer, a managed capability that simplifies the search and discovery of resources, is now available in the AWS GovCloud (US-East) and (US-West) Regions.

You can search for your AWS resources using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. From the search results displayed in the console, you can go to your resource’s service console and Region with a single step, and take action.
To turn on AWS Resource Explorer, visit the AWS Resource Explorer console. Read about getting started in our AWS Resource Explorer documentation, or explore the AWS Resource Explorer product page.
Amazon RDS for SQL Server now supports instances powered by AMD EPYC processors
Amazon RDS for SQL Server now supports M8a and R8a instances powered by 5th Generation AMD EPYC processors. On RDS for SQL Server, R8a and M8a instances deliver up to 70% higher throughput than comparable x86 instances for commonly used instance sizes.

Each vCPU in M8a and R8a instances corresponds to a physical CPU core, designed to deliver consistent per-core performance. For workloads with high I/O requirements, M8a and R8a instances provide up to 75 Gbps of network bandwidth and 60 Gbps of Amazon EBS bandwidth. Additionally, M8a and R8a instances support the RDS for SQL Server Optimize CPU feature, which allows customers to reduce their vCPU-based Microsoft SQL Server licensing charges by adjusting the number of vCPUs enabled on their instance. All instances are built on the AWS Nitro System using sixth-generation Nitro Cards. Amazon RDS for SQL Server M8a and R8a instances are available in all commercial AWS Regions where these instances are offered in Amazon EC2. Customers can purchase these instances using On-Demand pricing or as part of their Database Savings Plan. To learn more, visit the Amazon RDS for SQL Server pricing page and Amazon RDS User Guide.
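Adjusting licensed vCPUs is done through RDS processor features on ModifyDBInstance. The sketch below shows the request shape only; the instance identifier and core count are illustrative, and the supported coreCount values for a given instance size should be checked in the RDS documentation.

```python
# Sketch of an RDS ModifyDBInstance request that reduces the number of
# enabled cores (and thus SQL Server vCPU licensing) via processor features.
# Identifier and values are illustrative. A real call would be made with:
#   boto3.client("rds").modify_db_instance(**modify_request)
modify_request = {
    "DBInstanceIdentifier": "my-sqlserver-db",      # illustrative
    "ProcessorFeatures": [
        {"Name": "coreCount", "Value": "4"},        # enable only 4 cores
    ],
    "ApplyImmediately": True,
}
```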
Amazon OpenSearch Service now supports VPC egress for private connectivity to resources in your VPC
Amazon OpenSearch Service now supports the VPC egress option, which allows your virtual private cloud (VPC) domain to establish private network connections to resources in your VPC, such as ML models, AWS services, and custom applications, without exposing traffic to the public internet.

When you enable the VPC egress option, OpenSearch Service adds network interfaces to the subnets you selected for the domain and routes outbound traffic into your VPC. You can enable or disable the VPC egress option using the Amazon OpenSearch Service console, AWS CLI, or the CreateDomain and UpdateDomainConfig API operations.
VPC egress is now supported in all AWS Regions where Amazon OpenSearch Service is available. To get started, refer to Routing domain egress traffic through your VPC.
AWS Blogs
AWS Japan Blog (Japanese)
AWS Database Blog
- Migrating data from an Amazon Aurora snapshot into Amazon Aurora DSQL
- Announcing Valkey 9.0 for Amazon ElastiCache
- Full-text, exact-match, range, and hybrid search on Amazon ElastiCache
- Announcing aggregations on Amazon ElastiCache
- Valkey turns two
AWS for Industries
Artificial Intelligence
- Secure short-term GPU capacity for ML workloads with EC2 Capacity Blocks for ML and SageMaker training plans
- Overcoming reward signal challenges: Verifiable rewards-based reinforcement learning with GRPO on SageMaker AI
- Agents that transact: Introducing Amazon Bedrock AgentCore payments, built with Coinbase and Stripe
AWS for M&E Blog
AWS Security Blog
- ICYMI: April 2026 @AWS Security
- AWS achieves SNI 27017, SNI 27018, and SNI 9001 certifications for the AWS Asia Pacific (Jakarta) Region