3/26/2026, 12:00:00 AM ~ 3/27/2026, 12:00:00 AM (UTC)
Recent Announcements
Amazon SageMaker Studio launches support for Kiro and Cursor IDEs as remote IDEs
Today, AWS announces the ability to remotely connect from the Kiro and Cursor IDEs to Amazon SageMaker Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro and Cursor setup - including spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker Studio. By connecting Kiro and Cursor to SageMaker Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Studio offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab, Code Editor based on Code-OSS (Open-Source Software), and VS Code as a remote IDE. Starting today, you can also use your customized local Kiro and Cursor setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. You can authenticate using the AWS Toolkit extension in Kiro or Cursor, or through SageMaker Studio’s web interface. Once authenticated, connect to any of your SageMaker Studio development environments in a few clicks. You maintain the same security boundaries as SageMaker Studio’s web-based environments while developing AI models and analyzing data in the local IDE of your choice - Kiro or Cursor. To learn more, refer to the SageMaker user guide.
Aurora DSQL launches connector that simplifies building Ruby applications
Today we are announcing the release of the Aurora DSQL Connector for Ruby (pg gem), which makes it easy to build Ruby applications on Aurora DSQL. The Ruby connector streamlines authentication and eliminates security risks associated with traditional user-generated passwords by automatically generating tokens for each connection, ensuring valid tokens are always used while maintaining full compatibility with existing pg gem features.

The connector handles IAM token generation, SSL configuration, and connection pooling, enabling customers to scale from simple scripts to production workloads without changing their authentication approach. It also provides opt-in optimistic concurrency control (OCC) retry with exponential backoff, custom IAM credential providers, and AWS profile support, giving customers flexibility in how they manage their AWS credentials and handle transient failures. To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub page for the Ruby connector. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.
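The opt-in OCC retry behavior described above follows a familiar pattern: retry the transaction on a conflict, waiting exponentially longer (with jitter) between attempts. The following Python sketch illustrates that generic pattern only - the exception type, parameter names, and defaults are illustrative assumptions, not the Ruby connector's actual API.

```python
import random
import time

class OccConflictError(Exception):
    """Stand-in for the serialization/OCC conflict an Aurora DSQL commit can raise."""

def run_with_occ_retry(txn, max_attempts=5, base_delay=0.05, max_delay=1.0):
    """Run txn(), retrying on OCC conflicts with exponential backoff plus jitter.
    Illustrative sketch; the connector's real retry knobs may differ."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except OccConflictError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the conflict to the caller
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jittered backoff

# Usage: a transaction that conflicts twice, then commits on the third try.
attempts = {"n": 0}
def txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OccConflictError
    return "committed"

print(run_with_occ_retry(txn))  # "committed" after two retries
```

Jitter matters here: without it, transactions that conflicted with each other would all retry on the same schedule and likely conflict again.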
AWS Lambda increases the file descriptor limit to 4,096 for Lambda Managed Instances
AWS Lambda increases the file descriptor limit from 1,024 to 4,096, a 4x increase, for functions running on Lambda Managed Instances (LMI). This capability enables customers to run I/O-intensive workloads, such as high-concurrency web services and file-heavy data processing pipelines, without running into file descriptor limits. LMI enables you to run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and auto scaling, giving you access to specialized compute configurations including the latest-generation processors and high-bandwidth networking, with no operational overhead.

Customers use Lambda functions to build a wide range of serverless applications such as event-driven workloads, web applications, and AI-driven workflows. These applications rely on file descriptors for operations such as opening files, establishing network socket connections to external services and databases, and managing concurrent I/O streams for data processing. Each open file, network socket, or internal resource consumes one file descriptor. Previously, Lambda supported a maximum of 1,024 file descriptors; however, LMI allows multiple requests to be processed simultaneously, which often requires a higher number of file descriptors. With this launch, AWS Lambda is increasing the file descriptor limit to 4,096, allowing customers to run I/O-intensive workloads, maintain larger connection pools, and effectively utilize multi-concurrency for functions running on LMI. This feature is available in all AWS Regions where AWS Lambda Managed Instances is generally available. To get started, visit the AWS Lambda Managed Instances documentation.
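You can observe the descriptor limit your code actually runs under with Python's standard `resource` module; the same call inside a Lambda handler on LMI would reflect the new 4,096 limit. The headroom helper and its 64-descriptor reserve are illustrative, not part of any Lambda API.

```python
import resource

# Query the file descriptor (RLIMIT_NOFILE) limits for the current process.
# Each open file or network socket consumes one descriptor against the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

def fd_headroom(reserved: int = 64) -> int:
    """Rough budget of descriptors left for sockets and files, keeping a small
    reserve for the runtime itself (the reserve size here is illustrative)."""
    return max(0, soft - reserved)

print(f"usable descriptors: {fd_headroom()}")
```

Sizing connection pools against this number, rather than a hard-coded 1,024, keeps the same code correct as limits change across environments.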
AWS AppConfig adds enhanced targeting during feature flag rollout
AWS AppConfig enhances its deployment capabilities with new controls that allow customers to target feature flag and configuration data values to specific segments or individual users during the lifecycle of a gradual rollout.

One of AWS AppConfig’s key safety guardrails is the ability for customers to roll out feature flag or configuration data changes slowly, over the course of minutes or hours. This progressive delivery allows customers to move more safely and limit the impact of unexpected changes. AWS AppConfig uses customer-provided entity identifiers to make specific feature flag or dynamic configuration data “sticky” to individual target segments during the lifecycle of these gradual rollouts. This targeting capability, using the AppConfig Agent, provides fine-grained control, including targeting an individual user ID or set of IDs, while updates are being deployed.
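"Sticky" gradual rollouts keyed on an entity identifier are commonly built on stable hash bucketing: each ID maps deterministically to a position in [0, 100), and the flag is on for IDs below the current rollout percentage. The sketch below shows that generic technique in Python; it is not AppConfig's internal algorithm - with AppConfig, the Agent handles stickiness for you once you supply entity identifiers.

```python
import hashlib

def rollout_bucket(entity_id: str, salt: str = "my-flag") -> float:
    """Map an entity ID to a stable position in [0, 100].
    The salt (here a flag name) keeps buckets independent across flags."""
    digest = hashlib.sha256(f"{salt}:{entity_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF * 100

def flag_enabled(entity_id: str, rollout_pct: float) -> bool:
    """A user is in the rollout iff their bucket is below the current percentage."""
    return rollout_bucket(entity_id) < rollout_pct

# The same ID always lands in the same bucket, so a user who received the
# new value at 10% keeps it as the rollout grows to 50% and 100%.
user_in = flag_enabled("user-123", 10.0)
print(user_in == flag_enabled("user-123", 10.0))  # True: deterministic
```

Because buckets only grow monotonically into the rollout as the percentage rises, no user flips back and forth mid-deployment.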
The AWS Advanced JDBC Wrapper now supports automatic query caching with Valkey
The AWS Advanced JDBC Wrapper now supports automatically caching JDBC queries with Valkey, including Amazon ElastiCache for Valkey caches. Previously, developers who needed to cache JDBC query result sets had to manually write code to store and retrieve data from the cache for each query. Now you can automatically cache result sets from your Aurora and RDS PostgreSQL, MySQL, and MariaDB databases in just a few short steps. Simply add the wrapper dependency, enable the query cache plugin, configure database and cache endpoints, and indicate which queries to cache in your application code.
With this capability, you can store and retrieve query results directly from ElastiCache for Valkey, reducing the number of database reads and lowering read latency for frequently accessed data. Automated query caching can improve performance, lower costs, and increase application resilience by reducing database resource requirements. The AWS Advanced JDBC Wrapper supports annotating queries for caching using popular persistence APIs and frameworks including Hibernate and Spring Data, as well as manual query hinting. JDBC query caching with the AWS Advanced JDBC Wrapper works seamlessly with Amazon ElastiCache for Valkey. You can create a new Amazon ElastiCache for Valkey serverless cache with the AWS Management Console, Software Development Kit (SDK), Command Line Interface (CLI), or Model Context Protocol (MCP) server. For more information, see the Advanced JDBC Wrapper and Amazon ElastiCache for Valkey documentation.
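What the plugin automates is the classic cache-aside pattern: key the result set by the query, serve hits from the cache, and populate the cache on misses. The Python sketch below illustrates that pattern conceptually, with an in-memory dict standing in for Valkey; it is not the JDBC Wrapper's implementation, and the class and parameter names are hypothetical.

```python
import hashlib
import json
import time

class QueryCache:
    """Minimal cache-aside sketch for query result sets.
    A dict stands in for Valkey/ElastiCache; the JDBC Wrapper's query
    cache plugin performs the equivalent bookkeeping automatically."""
    def __init__(self, ttl_seconds=30.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, rows)

    def _key(self, sql, params):
        # Stable key over the query text and its bind parameters.
        return hashlib.sha256(json.dumps([sql, params]).encode()).hexdigest()

    def fetch(self, sql, params, run_query):
        key = self._key(sql, params)
        hit = self.store.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]                      # cache hit: skip the database
        rows = run_query(sql, params)          # cache miss: read the database
        self.store[key] = (time.monotonic() + self.ttl, rows)
        return rows

# Usage: the second identical query is served from the cache.
calls = {"n": 0}
def run_query(sql, params):
    calls["n"] += 1
    return [("alice",), ("bob",)]

cache = QueryCache()
cache.fetch("SELECT name FROM users", [], run_query)
cache.fetch("SELECT name FROM users", [], run_query)
print(calls["n"])  # the database was queried only once
```

The TTL is the key design trade-off: longer TTLs cut more database reads but serve staler results, which is why the wrapper leaves the choice of which queries to cache to your application code.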
Amazon EC2 R8gd instances are now available in additional AWS Regions
Amazon Elastic Compute Cloud (Amazon EC2) R8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in the US West (N. California), Asia Pacific (Seoul, Hong Kong, Jakarta), Africa (Cape Town), and Canada West (Calgary) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They offer up to 40% higher performance for I/O-intensive database workloads and up to 20% faster query results for I/O-intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage.

These instances are available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using the EC2 instance bandwidth weighting configuration, providing greater flexibility in allocating bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. To learn more, see Amazon R8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
Amazon EC2 M8a instances now available in AWS Europe (Ireland) region
Starting today, general-purpose Amazon EC2 M8a instances are available in the AWS Europe (Ireland) Region. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to M7a instances.

M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them well suited even for latency-sensitive workloads. M8a instances deliver even higher performance gains for specific workloads: they are up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. M8a instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 M8a instance page.
Amazon EC2 M8a instances now available in AWS GovCloud (US-West) region
Starting today, general-purpose Amazon EC2 M8a instances are available in the AWS GovCloud (US-West) Region. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to M7a instances.

M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them well suited even for latency-sensitive workloads. M8a instances deliver even higher performance gains for specific workloads: they are up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. M8a instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 M8a instance page.
AWS Storage Gateway Terraform modules now support Amazon Linux 2023
AWS Storage Gateway Terraform modules now enable Amazon Linux 2023-based deployments, delivering improved security, reliability, and operational simplicity for Infrastructure as Code (IaC) provisioning. The updated modules support all gateway types, including Amazon S3 File Gateway, Tape Gateway, and Volume Gateway, in both Amazon EC2 and VMware environments.

You can use the new Terraform modules to deploy AL2023-based gateways that enforce IMDSv2 by default for EC2 deployments, protecting against credential theft and server-side request forgery (SSRF) attacks. The update prevents unexpected gateway replacements during routine Terraform operations and simplifies Active Directory integration with optional domain controller configuration. EC2-based gateways now support optional Elastic IP address (EIP) association, enabling fully private gateway activations.
To get started, download the Terraform Storage Gateway module. To learn more, visit the AWS Storage Gateway product page or the Storage Gateway User Guide. See the AWS Region Table for complete regional availability.
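The IMDSv2 enforcement mentioned above corresponds to the standard EC2 `metadata_options` setting in the Terraform AWS provider. The fragment below is a hand-written illustration of that underlying setting, not the module's actual source; the resource name and data source are hypothetical, and the module applies the equivalent configuration for you.

```hcl
# Illustrative only: shows the EC2 metadata setting the module enforces.
resource "aws_instance" "file_gateway" {
  ami           = data.aws_ami.storage_gateway.id  # hypothetical AMI lookup
  instance_type = "m5.xlarge"

  metadata_options {
    http_endpoint = "enabled"
    http_tokens   = "required" # IMDSv2 only: session tokens block SSRF-style credential theft
  }
}
```

With `http_tokens = "required"`, the instance metadata service rejects the simple GET requests that SSRF attacks typically rely on, so instance credentials cannot be read without first obtaining a session token.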
AWS Blogs
AWS Japan Blog (Japanese)
- Best practices for deploying the AWS DevOps Agent in production environments
- Amazon CloudFront supports mTLS authentication to the origin
- How organizations will change in the AI era — Jeff Barr talks about the future of development teams and Mitsubishi Electric’s challenges
- AWS Weekly Roundup: NVIDIA Nemotron 3 Super, Nova Forge SDK, Amazon Corretto 26 and more on Amazon Bedrock (2026/3/23)
- Amazon Quick is now available in the AWS Asia Pacific (Tokyo) region
AWS Japan Startup Blog (Japanese)
AWS News Blog
AWS Architecture Blog
AWS Big Data Blog
- Introducing enhancements to Amazon EMR Managed Scaling
- Build AWS Glue Data Quality pipeline using Terraform
AWS Contact Center
AWS Database Blog
AWS HPC Blog
Artificial Intelligence
- Run Generative AI inference with Amazon Bedrock in Asia Pacific (New Zealand)
- Building age-responsive, context-aware AI with Amazon Bedrock Guardrails
- Accelerating LLM fine-tuning with unstructured data using SageMaker Unified Studio and S3
- Introducing Amazon Polly Bidirectional Streaming: Real-time speech synthesis for conversational AI