7/17/2025, 12:00:00 AM ~ 7/18/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon RDS for SQL Server now supports linked servers with Oracle OLEDB Driver version 21.16
Amazon RDS for SQL Server now supports linked servers with Oracle OLEDB Driver version 21.16.
Linked servers allow customers to access external data sources from within their Amazon RDS for SQL Server database. Linked servers with Oracle OLEDB can be used with SQL Server Standard or Enterprise Edition on SQL Server 2017, 2019, or 2022 to read data from and run commands against remote Oracle database servers running Oracle Database 18c, 19c, or 21c. Oracle OLEDB Driver version 21.16 provides improved performance and reliable access to Oracle databases. Refer to the Amazon RDS User Guide to learn more about linked server integrations. Amazon RDS for SQL Server support for linked servers is available in all AWS Regions where Amazon RDS for SQL Server is available.
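For illustration, a minimal sketch that runs the standard linked-server setup T-SQL from Python via pyodbc; the hostnames, credentials, Oracle service name, and table below are placeholders, not values from the announcement.

```python
# Sketch: create an Oracle linked server on RDS for SQL Server via T-SQL.
# Hostnames, credentials, service name, and table are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=my-rds-instance.example.us-east-1.rds.amazonaws.com,1433;"
    "UID=admin;PWD=...;TrustServerCertificate=yes"
)
conn.autocommit = True
cur = conn.cursor()

# Register the linked server using the Oracle OLEDB provider.
cur.execute("""
EXEC master.dbo.sp_addlinkedserver
    @server = N'ORACLE_LS',
    @srvproduct = N'Oracle',
    @provider = N'OraOLEDB.Oracle',
    @datasrc = N'oracle-host:1521/ORCLPDB1';
""")

# Map a SQL Server login to Oracle credentials.
cur.execute("""
EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'ORACLE_LS',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = N'oracle_user',
    @rmtpassword = N'...';
""")

# Query the remote Oracle database through the linked server
# (four-part name: linked server, empty catalog, schema, table).
for row in cur.execute("SELECT * FROM ORACLE_LS..ORACLE_USER.EMPLOYEES"):
    print(row)
```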
Announcing Amazon DynamoDB local major version release version 3.0.0
Today, DynamoDB local, the downloadable version of Amazon DynamoDB, migrates to AWS SDK for Java 2.x. In alignment with our SDKs and Tools Maintenance Policy, this migration keeps DynamoDB local up to date in security, compatibility, and stability for developers building and testing their DynamoDB applications locally. With the new DynamoDB local version 3.0.0, you can now fully remove the dependency on the AWS SDK for Java 1.x. You can upgrade an application that uses a previous DynamoDB local version to DynamoDB local 3.0.0 by making the following updates to your codebase:
All import statements referencing the current com.amazonaws namespace need to be updated to the new software.amazon.dynamodb namespace.
If running DynamoDB local as an Apache Maven dependency, reference the new DynamoDB Maven repository in your application’s Project Object Model (POM) file. See DynamoDB Local Sample Java Project.
If running DynamoDB local in embedded mode using client class name AmazonDynamoDB, reference Client Changes on how to migrate from AWS SDK version 1 to AWS SDK version 2.
To learn more about the AWS SDK for Java 2.x, see AWS SDK for Java 2.x. For more information about DynamoDB local, see Setting Up DynamoDB Local (Downloadable Version).
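For reference, a minimal sketch (Python here, though the same applies in any SDK language) pointing a client at DynamoDB local on its default port; the wire protocol is unchanged by the 3.0.0 release, so only users embedding the Java classes need the code changes above.

```python
# Sketch: point an AWS SDK client at DynamoDB local (default port 8000).
import boto3

dynamodb = boto3.client(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # DynamoDB local endpoint
    region_name="us-west-2",               # any value; DynamoDB local ignores it
    aws_access_key_id="dummy",             # DynamoDB local accepts dummy credentials
    aws_secret_access_key="dummy",
)

print(dynamodb.list_tables()["TableNames"])
```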
AWS Clean Rooms ML now supports Parquet file format
Starting today, AWS Clean Rooms supports training custom ML models on data in Parquet file format. Parquet is a free and open-source column-oriented data storage format that provides efficient data compression and encoding schemes with enhanced performance.
With AWS Clean Rooms ML custom modeling, you and your partners can train a custom ML model using collective datasets at scale without having to share sensitive intellectual property. By creating ML input channels in Parquet file format, you can process large volumes of data more efficiently and encode non-text data, allowing you to train on images and other binary-encoded data types. AWS Clean Rooms ML helps you and your partners apply privacy-enhancing controls to safeguard your proprietary data and ML models while generating predictive insights—all without sharing or copying one another’s raw data or models. For more information about the AWS Regions where AWS Clean Rooms ML is available, see the AWS Regions table. To learn more, visit AWS Clean Rooms ML.
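For a sense of the format itself, a minimal pyarrow sketch that writes a Parquet file with a binary-encoded column; the dataset and compression choice are illustrative, not specific to Clean Rooms ML.

```python
# Sketch: write a column-oriented Parquet file with pyarrow.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "user_id": [1, 2, 3],
    "embedding": [b"\x00\x01", b"\x02\x03", b"\x04\x05"],  # binary-encoded column
})

# Snappy is a common default; Parquet also supports gzip, zstd, and others.
pq.write_table(table, "training_input.parquet", compression="snappy")

print(pq.read_table("training_input.parquet").schema)
```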
Amazon ECS enables built-in blue/green deployments
Amazon Elastic Container Service (Amazon ECS) announces new features that make software updates for your containerized applications safer, allowing you to ship software faster and with higher confidence, without needing to build custom deployment tooling. Amazon ECS now supports a built-in blue/green deployment strategy and deployment lifecycle hooks that allow you to test new application versions in production environments and quickly roll back failed deployments.
You can now deploy software updates to Amazon ECS services that serve traffic from an Application Load Balancer (ALB), Network Load Balancer (NLB), or ECS Service Connect with a blue/green deployment strategy. When you use a blue/green deployment strategy, Amazon ECS provisions the new application version alongside the old and lets you validate the new version before routing production traffic to it. You can use deployment lifecycle hooks to perform custom validation steps and block the deployment until validation succeeds. Furthermore, once production traffic has shifted, you can let the new application bake for a pre-specified period and roll back to the old version without incurring downtime if you detect a regression. To detect failures automatically, you can configure Amazon CloudWatch alarms and the ECS deployment circuit breaker to monitor your deployments. Together, these capabilities help make your software updates safer, allowing you to ship new capabilities faster.
You can use blue/green deployments and deployment lifecycle hooks for new and existing Amazon ECS services in all commercial AWS Regions using the AWS Management Console, SDK, CLI, CloudFormation, CDK, and Terraform by following the steps on the blog. For more details, see our documentation.
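For illustration, a hedged boto3 sketch switching an existing service to the built-in blue/green strategy with one validation hook; the deploymentConfiguration field names follow the launch materials, so verify them against the current ECS API reference, and treat all ARNs as placeholders.

```python
# Sketch: enable ECS built-in blue/green on an existing service.
# Field names are from the launch documentation; confirm against the
# current ECS API reference. ARNs and names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="my-service",  # a service using the built-in ECS deployment controller
    deploymentConfiguration={
        "strategy": "BLUE_GREEN",
        "bakeTimeInMinutes": 10,  # keep blue tasks around for fast rollback
        "lifecycleHooks": [
            {
                # Lambda hook that validates green tasks before production traffic shifts
                "hookTargetArn": "arn:aws:lambda:us-east-1:111122223333:function:validate-green",
                "roleArn": "arn:aws:iam::111122223333:role/ecs-hook-role",
                "lifecycleStages": ["POST_TEST_TRAFFIC_SHIFT"],
            }
        ],
    },
)
```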
Amazon SNS enhances cross-Region delivery capabilities
We’re excited to announce that Amazon Simple Notification Service (Amazon SNS) has expanded its cross-Region delivery capabilities, providing more flexibility for customers using opt-in Regions. This update brings three improvements:
Amazon SNS to Amazon SQS delivery between opt-in Regions: You can now deliver messages from an Amazon SNS topic in one opt-in Region to an Amazon SQS queue in another opt-in Region.
Amazon SNS to AWS Lambda delivery between opt-in Regions: Message delivery from an Amazon SNS topic in one opt-in Region to an AWS Lambda function in another opt-in Region is now supported.
Amazon SNS to AWS Lambda delivery from default to opt-in Regions: You can now deliver messages from an Amazon SNS topic in a default-enabled Region to an AWS Lambda function in an opt-in Region.
These enhancements provide greater flexibility in designing distributed systems across AWS Regions, making it easier to leverage opt-in Regions in your architectures. To use these new capabilities, ensure that you’ve enabled the required opt-in Regions for your account. When configuring cross-Region subscriptions involving opt-in Regions, remember to use the Region-specific service principal (sns.<region>.amazonaws.com) in your resource policies. For more information on working with opt-in Regions, refer to the AWS Account Management documentation. To learn more about Amazon SNS cross-Region deliveries, please refer to the Amazon SNS documentation.
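As an illustration, a minimal boto3 sketch granting an SNS topic in one opt-in Region permission to deliver to an SQS queue in another; the Region names, account ID, and ARNs are placeholders.

```python
# Sketch: queue policy allowing cross-Region SNS delivery into an opt-in Region.
# Assumes the topic lives in ap-southeast-3 (Jakarta) and the queue in
# me-central-1 (UAE); all identifiers are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="me-central-1")  # the queue's opt-in Region

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Region-specific service principal for the topic's Region
        "Principal": {"Service": "sns.ap-southeast-3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:me-central-1:111122223333:my-queue",
        "Condition": {
            "ArnEquals": {"aws:SourceArn": "arn:aws:sns:ap-southeast-3:111122223333:my-topic"}
        },
    }],
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.me-central-1.amazonaws.com/111122223333/my-queue",
    Attributes={"Policy": json.dumps(policy)},
)
```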
AWS Lambda enables developers to debug functions running in the cloud from VS Code IDE
AWS Lambda now supports remote debugging in Visual Studio Code (VS Code), enabling developers to debug their Lambda functions running in the cloud directly from their local IDE. With this new capability, developers can use familiar debugging tools like breakpoints, variable inspection, and step-through debugging with functions deployed in the cloud without modifying their existing development workflow, accelerating their serverless development process.
Developers building serverless applications with Lambda often need to test and debug cross-service integrations involving multiple AWS services that may be attached to an Amazon Virtual Private Cloud (VPC) or require specific AWS Identity and Access Management (IAM) permissions. Previously, in the absence of tools to fully replicate the Lambda runtime environment and its interactions with other AWS services locally, developers had to rely on print statements, logs, and multiple iterative deployments to diagnose and resolve issues. With remote debugging in VS Code, developers can now debug the execution environment of a function running in the cloud, with complete access to VPC resources and IAM roles, and trace execution through entire service flows in the cloud. Developers can also quickly make updates to their function and test the changes. This launch eliminates the need for complex local debugging setups and repeated deployments, reducing the time to identify and fix issues from hours to minutes. This feature is available to all developers with the AWS Toolkit (v3.69.0 or later) installed in VS Code, at no additional cost. To get started, select a Lambda function in the VS Code IDE and click “Invoke Remotely”. You can then start a remote debugging session with a single click. The AWS Toolkit will automatically download the function code, establish a secure debugging connection, and enable breakpoint setting. To learn more, visit the AWS News blog post, AWS Toolkit documentation, and Lambda developer guide.
AWS Lambda announces low latency processing for Kafka events
AWS Lambda now supports low latency (sub-100ms) event processing for Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka event sources using Provisioned mode for Kafka event source mappings (ESMs). Customers can now set the MaximumBatchingWindowInSeconds parameter to 0 in their Kafka ESM configuration, enabling real-time processing of Kafka events. This enhancement significantly reduces end-to-end processing latency for time-sensitive business applications.
Kafka customers increasingly build mission-critical applications that require consistent end-to-end latency of less than 100ms to meet stringent business requirements across industries. Examples include financial services firms processing market data feeds and executing algorithmic trades, e-commerce platforms providing real-time personalized recommendations, and gaming companies managing live player interactions. With today’s launch, Lambda natively supports low latency event processing with efficient optimization of polling and invoking Kafka events, allowing customers to build mission-critical or latency-sensitive Kafka applications on Lambda. With MaximumBatchingWindowInSeconds set to 0, the Kafka ESM invokes the function with Kafka events immediately after the previous invocation completes. This configuration makes end-to-end latency solely dependent on function duration, potentially providing average end-to-end latencies of around 50ms for critical real-time applications. This feature is generally available in all AWS Commercial Regions where AWS Lambda Kafka ESM is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary). To enable low latency processing, set MaximumBatchingWindowInSeconds to 0 and enable Provisioned mode for your new or existing Kafka ESM using the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, or AWS SAM. To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
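For example, a minimal boto3 sketch enabling low latency processing on an existing MSK event source mapping; the ESM UUID and poller counts are placeholders.

```python
# Sketch: zero the batching window and enable Provisioned mode on a Kafka ESM.
import boto3

lam = boto3.client("lambda")

lam.update_event_source_mapping(
    UUID="14e0db71-...",               # your Kafka ESM's UUID (placeholder)
    MaximumBatchingWindowInSeconds=0,  # invoke as soon as the previous invoke completes
    ProvisionedPollerConfig={          # Provisioned mode: dedicated event pollers
        "MinimumPollers": 1,
        "MaximumPollers": 5,
    },
)
```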
AWS Lambda bridges console to VS Code for unified serverless development experience
AWS Lambda now enables a seamless transition from the console to the Visual Studio Code (VS Code) IDE. This new console-to-IDE integration eliminates the friction between cloud and local development environments for serverless applications.
Developers who start in the console need the more sophisticated development capabilities of a local IDE as their applications grow in complexity. Previously, they had to manually configure their local development environment, including IDE installation, copying function code, configurations, and integration settings, before they could begin development. This was time-consuming and interrupted the development workflow. With the new console-to-IDE integration, developers can now transition their Lambda functions to VS Code with a single click, preserving code and configurations. This enables developers to use advanced IDE capabilities like external dependency management (using package managers like npm and pip) and development tools like linters and formatters, without the setup overhead. This launch also introduces a new capability in the VS Code IDE that enables developers to easily convert their applications to AWS Serverless Application Model (AWS SAM) templates, simplifying their Infrastructure as Code (IaC) practices and CI/CD pipeline integration. To get started, click the “Open in Visual Studio Code” button in the Lambda console’s Code tab or the Getting Started popup when creating new functions. This will automatically open the VS Code IDE on your local device or take you through a guided process to install required tools, including VS Code and the AWS Toolkit. To learn more about this experience, visit the AWS News blog post, Lambda developer guide, and AWS Toolkit for VS Code documentation. This feature is available in all commercial AWS Regions where Lambda is available, except the AWS GovCloud (US) Regions, at no additional cost.
Amazon DynamoDB Streams now supports a ShardFilter parameter in the DescribeStream API
Amazon DynamoDB Streams now supports a new ShardFilter parameter in the DescribeStream API to simplify and optimize the consumption of streaming data. You can use the ShardFilter parameter to quickly discover child shards after a parent shard has been closed, significantly improving efficiency and responsiveness when processing data from DynamoDB Streams.
DynamoDB Streams is a serverless data streaming feature that makes it straightforward to track, process, and react to item-level changes in DynamoDB tables in near real time. DynamoDB Streams enables diverse change data capture use cases, including building event-driven applications, data replication, auditing, and implementing data analytics and machine learning capabilities. Applications consuming data from DynamoDB Streams can efficiently transition from reading a closed shard to its child shard using this optional ShardFilter parameter, avoiding repeated calls to the DescribeStream API to retrieve and traverse the shard map for all closed and open shards. This API enhancement helps ensure smoother transitions and lower latency when switching between shards, making your stream processing applications more responsive and cost-effective. The new ShardFilter parameter is available in all AWS Regions. You can get started with the feature by using the AWS API, Kinesis Client Library (KCL) 3.0, or the Apache Flink connector for DynamoDB Streams. Customers that use AWS Lambda to consume DynamoDB Streams will automatically benefit from this enhanced API experience. For more information, refer to Working with DynamoDB Streams in the DynamoDB Developer Guide and the API Reference for DescribeStream.
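For illustration, a hedged boto3 sketch using the new parameter; the CHILD_SHARDS filter type shown is an assumption inferred from the announcement's use case, so confirm the exact values in the DescribeStream API reference. The stream ARN and shard ID are placeholders.

```python
# Sketch: after a parent shard closes, ask DescribeStream for just its
# children instead of paging through the whole shard map.
# The filter type value is an assumption; check the API reference.
import boto3

streams = boto3.client("dynamodbstreams")

resp = streams.describe_stream(
    StreamArn="arn:aws:dynamodb:us-east-1:111122223333:table/my-table/stream/2025-07-17T00:00:00.000",
    ShardFilter={
        "Type": "CHILD_SHARDS",         # assumed filter type: children of a closed shard
        "ShardId": "shardId-00000001",  # the closed parent shard (placeholder)
    },
)

for shard in resp["StreamDescription"]["Shards"]:
    print(shard["ShardId"], shard.get("ParentShardId"))
```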
Amazon Connect agent workspace now includes real-time agent performance metrics
The Amazon Connect agent workspace now includes an out-of-box analytics dashboard that provides agents with insights into their individual performance, such as contacts handled and average handle time. The dashboard also shows agents their assigned queue metrics, such as contacts in queue and longest wait time. These insights help agents improve their performance and make decisions that enhance the customer experience. For example, agents can delay breaks when they observe high queue volumes, helping to reduce customer wait times.
The Amazon Connect agent workspace analytics dashboard is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London). To learn more and get started, visit the webpage and documentation.
AWS Outposts now supports booting Amazon EC2 instances from external storage arrays
Starting today, customers can boot Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts using boot volumes backed by NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™, including authenticated and encrypted volumes. This enhancement supports both iSCSI SAN boot and LocalBoot options, with LocalBoot supporting both iSCSI and NVMe-over-TCP protocols. Complementary to fully managed Amazon EBS and Local Instance Store volumes, this capability extends our existing support for external data volumes to now include boot volumes from third-party storage arrays, providing customers with greater flexibility in how they leverage their storage investments with Outposts.
With this new feature, customers can maximize value from their on-premises storage investments while leveraging the cloud operational model of Outposts. They can now use their compatible enterprise storage arrays for both boot and data volumes, benefiting from advanced data management features and high performance. The support for external boot volumes enables customers to streamline operating system (OS) management through centralized boot volume management, while helping meet data residency requirements through on-premises storage arrays. To simplify the process, AWS offers automation scripts through AWS Samples to help customers easily set up and use external boot volumes with EC2 instances on Outposts. This enhancement is available on Outposts 2U servers and Outposts racks at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information.
You can use the AWS-provided automation scripts to get started with external boot volumes on Outposts. To learn more about implementation details and best practices, check out this blog post or visit our technical documentation for Outposts servers, second-generation Outposts racks, and first-generation Outposts racks.
Amazon MemoryDB now supports an AWS FIS action to pause multi-Region cluster replication
Amazon MemoryDB now supports an AWS Fault Injection Service action to pause replication for multi-Region clusters. FIS is a fully managed service for running controlled fault injection experiments to improve an application’s performance, observability, and resilience. Amazon MemoryDB Multi-Region is a fully managed, active-active, multi-Region database that lets you build multi-Region applications with up to 99.999% availability and microsecond read and single-digit millisecond write latencies. Customers can use the new FIS action to observe how their application responds to a disruption in regional replication, and tune their monitoring and recovery process to improve resiliency and application availability.
MemoryDB Multi-Region enables you to build multi-Region applications that need high availability, increased application resiliency, and improved business continuity. This new FIS action reproduces the real-world behavior when replication in a multi-Region cluster is interrupted and resumed. This lets you test and build confidence that your application responds as intended when resources in a Region are not accessible. You can create an experiment template in FIS to integrate the experiment with continuous integration and release testing and to combine with other FIS actions. For example, MemoryDB Pause Replication is combined with other actions in the Cross-Region: Connectivity scenario to isolate a Region. MemoryDB Multi-Region Pause Replication is now available in all AWS Regions where MemoryDB Multi-Region is available. To learn more, visit the MemoryDB FIS actions documentation.
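As a sketch of how this could be wired into FIS, the following creates an experiment template around the new action; the action ID, target resource type, and action parameter names are assumptions for illustration only, so take the exact identifiers from the MemoryDB FIS actions documentation. ARNs are placeholders.

```python
# Sketch: FIS experiment template for the MemoryDB pause-replication action.
# Action ID, resource type, and parameter names below are ASSUMED identifiers;
# look up the real ones in the MemoryDB FIS actions documentation.
import boto3

fis = boto3.client("fis")

fis.create_experiment_template(
    clientToken="memorydb-pause-repl-1",
    description="Pause multi-Region replication and observe application behavior",
    roleArn="arn:aws:iam::111122223333:role/fis-experiment-role",
    stopConditions=[{"source": "none"}],  # or a CloudWatch alarm to halt the test
    targets={
        "cluster": {
            "resourceType": "aws:memorydb:multi-region-cluster",  # assumed identifier
            "resourceArns": ["arn:aws:memorydb:us-east-1:111122223333:multiregioncluster/my-cluster"],
            "selectionMode": "ALL",
        }
    },
    actions={
        "pauseReplication": {
            "actionId": "aws:memorydb:pause-multi-region-cluster-replication",  # assumed identifier
            "parameters": {"duration": "PT10M"},  # assumed parameter: pause for 10 minutes
            "targets": {"Clusters": "cluster"},
        }
    },
)
```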
Amazon RDS for PostgreSQL 18 Beta 2 is now available in Amazon RDS Database Preview Environment
Amazon RDS for PostgreSQL 18 Beta 2 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 18 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 18 Beta 2 in the Amazon RDS Database Preview Environment with the benefits of a fully managed database.
PostgreSQL 18 includes “skip scan” support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel GIN index builds and updates join operations. Observability improvements show buffer usage counts and index lookups during query execution, along with per-connection I/O utilization metrics. Please refer to the RDS PostgreSQL release documentation for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
Amazon AppStream 2.0 expands support for GPU based instances
Today, Amazon AppStream 2.0 announces support for Graphics G6 instances, which are built on the EC2 G6 family and designed for graphics-intensive applications. Amazon EC2 G6 instances feature NVIDIA L4 Tensor Core GPUs and third-generation AMD EPYC processors.
With this launch, AppStream 2.0 introduces 9 Graphics G6 instance sizes, featuring system memory ranging from 16 GB to 384 GB and vCPU counts from 4 to 96. Seven G6 sizes maintain a 1:4 vCPU-to-memory ratio and include a full GPU, except the g6.12xlarge and g6.24xlarge instances, each of which has 4 GPUs. Two Gr6 sizes, 4xlarge and 8xlarge, feature a 1:8 vCPU-to-memory ratio. This range of new graphics instances allows you to choose the right price and performance for your graphics-intensive applications. Graphics G6 instance family support is available in 13 AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Paris, Frankfurt, London), Asia Pacific (Tokyo, Mumbai, Sydney, Seoul), South America (Sao Paulo), and AWS GovCloud (US-West). AppStream 2.0 offers pay-as-you-go pricing; see Amazon AppStream 2.0 Pricing for more information. To get started, select an AppStream 2.0 Graphics G6 instance when launching an image builder or creating a new fleet. You can launch Graphics G6 instances using either the AWS Management Console or the AWS SDK. To learn more, see AppStream 2.0 Instance Families.
Amazon OpenSearch Service now supports integration with Amazon Aurora MySQL and PostgreSQL
Amazon OpenSearch Service now allows seamless ingestion of data from Amazon Aurora MySQL and PostgreSQL, enabling customers to take advantage of advanced search capabilities like full-text, hybrid, and vector search on their data in relational databases. Customers can now synchronize their data from Amazon Aurora MySQL and PostgreSQL to Amazon OpenSearch Service within seconds of it being written, without the need to write custom code to build and maintain complex data pipelines for extract, transform, and load operations.
This integration uses Amazon OpenSearch Ingestion to synchronize the data from Amazon Aurora MySQL and PostgreSQL databases to OpenSearch Service. Amazon OpenSearch Ingestion automatically understands the format of the data in the Amazon Aurora MySQL and PostgreSQL tables and maps the data to your index mapping templates in Amazon OpenSearch Service to yield the most performant search results. Furthermore, customers can synchronize data from multiple tables in their Amazon Aurora MySQL and PostgreSQL databases into one Amazon OpenSearch managed cluster or serverless collection to offer holistic insights across several applications. The integration of Amazon OpenSearch Service with Amazon Aurora MySQL and PostgreSQL is available in the following Regions where Amazon OpenSearch Ingestion is available today: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), and Canada (Central). To learn more about this feature, see the Amazon OpenSearch Service Developer Guide and the launch blog.
Amazon OpenSearch Service now supports integration with Amazon RDS for MySQL and PostgreSQL
Amazon OpenSearch Service now allows seamless ingestion of data from Amazon RDS for MySQL and PostgreSQL, enabling customers to take advantage of advanced search capabilities like full-text, hybrid, and vector search on their data in relational databases. Customers can now synchronize their data from Amazon RDS for MySQL and PostgreSQL to Amazon OpenSearch Service within seconds of it being written, without the need to write custom code to build and maintain complex data pipelines for extract, transform, and load operations.
This integration uses Amazon OpenSearch Ingestion to synchronize the data from Amazon RDS for MySQL and PostgreSQL databases to OpenSearch Service. Amazon OpenSearch Ingestion automatically understands the format of the data in the Amazon RDS for MySQL and PostgreSQL tables and maps the data to your index mapping templates in Amazon OpenSearch Service to yield the most performant search results. Furthermore, customers can synchronize data from multiple tables in their Amazon RDS for MySQL and PostgreSQL databases into one Amazon OpenSearch managed cluster or serverless collection to offer holistic insights across several applications. The integration of Amazon OpenSearch Service with Amazon RDS for MySQL and PostgreSQL is available in the following Regions where Amazon OpenSearch Ingestion is available today: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), and Canada (Central). To learn more about this feature, see the Amazon OpenSearch Service Developer Guide and the launch blog.
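As an illustration of how such an integration is provisioned, a hedged boto3 sketch creating an OpenSearch Ingestion pipeline; the create_pipeline call is the standard OSIS API, but the pipeline YAML (source plugin name and its options) is a simplified assumption, so start from the console blueprint or the launch blog for a working configuration.

```python
# Sketch: create an OpenSearch Ingestion pipeline syncing an RDS table
# into an OpenSearch index. The YAML body is illustrative only; build the
# real configuration from the console blueprint for this integration.
import boto3

osis = boto3.client("osis")

pipeline_yaml = """
version: "2"
rds-to-opensearch:
  source:
    rds:                      # illustrative plugin configuration
      db_identifier: "my-rds-instance"
      tables:
        include: ["mydb.orders"]
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
        index: "orders"
"""

osis.create_pipeline(
    PipelineName="rds-to-opensearch",
    MinUnits=1,
    MaxUnits=4,
    PipelineConfigurationBody=pipeline_yaml,
)
```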
Amazon S3 Multi-Region Access Points are now available in 12 additional AWS Regions
Amazon S3 Multi-Region Access Points are now available in 12 additional AWS opt-in Regions: Asia Pacific (Jakarta, Hong Kong, Hyderabad, and Melbourne), Europe (Zurich, Spain, and Milan), Middle East (Bahrain and UAE), Canada West (Calgary), Africa (Cape Town), and Israel (Tel Aviv).
To get started, you need to first enable the AWS opt-in Region for your account by using the steps outlined here. Next, you can use the AWS CLI or AWS SDK to create an S3 Multi-Region Access Point in an AWS opt-in Region. For pricing information, visit the Amazon S3 pricing page. To learn more about S3 Multi-Region Access Points, visit the feature page, S3 User Guide, or S3 FAQs.
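For example, a minimal boto3 sketch creating a Multi-Region Access Point over buckets in two opt-in Regions; the account ID and bucket names are placeholders, and note that MRAP control-plane requests are served from us-west-2.

```python
# Sketch: create an S3 Multi-Region Access Point spanning buckets in two
# opt-in Regions. Account ID and bucket names are placeholders.
import boto3

s3control = boto3.client("s3control", region_name="us-west-2")  # MRAP control plane

resp = s3control.create_multi_region_access_point(
    AccountId="111122223333",
    Details={
        "Name": "my-mrap",
        "Regions": [
            {"Bucket": "my-bucket-jakarta"},   # bucket in ap-southeast-3
            {"Bucket": "my-bucket-tel-aviv"},  # bucket in il-central-1
        ],
    },
)

# Creation is asynchronous; poll with describe_multi_region_access_point_operation.
print(resp["RequestTokenARN"])
```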
AWS Glue now supports new workers for larger and memory intensive workloads
AWS Glue now offers additional worker types to meet diverse data integration and data processing needs. The new workers include larger G.12X and G.16X general compute workers, and four new memory-optimized workers, R.1X, R.2X, R.4X, and R.8X, for memory-intensive AWS Glue workloads. Glue customers can now handle more complex transforms, aggregations, joins, and queries, and process higher volumes of data more quickly with Apache Spark.
The new G.12X and G.16X workers extend the existing G worker sizes, offering more compute, memory, and storage. These workers are ideal for customers with large and resource-intensive workloads. The new R.1X, R.2X, R.4X, and R.8X workers provide double the memory of the corresponding G workers, making them suitable for workloads with memory-intensive Spark operations like caching, shuffling, and aggregating. Customers can select these new worker types in AWS Glue Studio, using notebooks or Visual ETL, or via the Glue Job APIs. For more information on these new worker types and the AWS Regions where the new workers are available, visit the AWS Glue documentation.
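As a quick illustration, a boto3 sketch creating a Glue Spark job on one of the new memory-optimized workers; the job name, role, and script location are placeholders.

```python
# Sketch: run a Glue Spark job on one of the new worker types.
# WorkerType values come from this announcement; identifiers are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="large-join-job",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/large_join.py",
    },
    GlueVersion="5.0",
    WorkerType="R.4X",  # memory-optimized; use G.12X / G.16X for larger general compute
    NumberOfWorkers=10,
)
```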
AWS Blogs
AWS Japan Blog (Japanese)
- AWS JumpStart Zero for FSI, for young financial engineers, will be held!
- [AWS JumpStart Next] AWS JumpStart Next #1 Event Report: A place where connections between technology and people are born
- [Event Report] AWS Summit Japan 2025 Logistics Industry Booth Exhibition “Warehouse x OCR x Generative AI Agent”
- Connect Amazon WorkSpaces Personal with AWS PrivateLink
- How to use data from the AWS Open Data Program with Amazon Bedrock
- AWS Weekly Roundup: AWS Builder Center, Amazon Q, Oracle Database @AWS, etc. (July 14, 2025)
AWS News Blog
- Simplify serverless development with console to IDE and remote debugging for AWS Lambda
- AWS AI League: Learn, innovate, and compete in our new ultimate AI showdown
- Accelerate safe software releases with new built-in blue/green deployments in Amazon ECS
AWS Big Data Blog
- Integrating Amazon OpenSearch Ingestion with Amazon RDS and Amazon Aurora
- Scale your AWS Glue for Apache Spark jobs with R type, G.12X, and G.16X workers
Artificial Intelligence
- Evaluating generative AI models with Amazon Nova LLM-as-a-Judge on Amazon SageMaker AI
- Building cost-effective RAG applications with Amazon Bedrock Knowledge Bases and Amazon S3 Vectors
- Implementing on-demand deployment with customized Amazon Nova models on Amazon Bedrock
- Building enterprise-scale RAG applications with Amazon S3 Vectors and DeepSeek R1 on Amazon SageMaker AI
AWS Storage Blog
- Enhancing FSx for Windows security: AI-powered anomaly detection
- Copy objects between any Amazon S3 storage classes using S3 Batch Operations