5/15/2025, 12:00:00 AM ~ 5/16/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon OpenSearch Ingestion increases memory for an OCU to 15 GB
We are pleased to announce that the memory allocation per OpenSearch Compute Unit (OCU) for Amazon OpenSearch Ingestion has been increased from 8 GB to 15 GB. One OCU now includes 2 vCPUs and 15 GB of memory by default, allowing customers to leverage greater in-memory processing for their data ingestion pipelines without modifying existing configurations.

With the increased memory per OCU, Amazon OpenSearch Ingestion is better equipped to handle memory-intensive processing tasks such as trace analytics, aggregations, and enrichment operations. Customers can now build more complex, higher-throughput ingestion pipelines with reduced risk of out-of-memory failures. The increased OCU memory is now available at no additional cost in all AWS Regions where Amazon OpenSearch Ingestion is offered. You can take advantage of these improvements by updating your existing pipelines or creating new pipelines through the Amazon OpenSearch Service console or APIs. To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.
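The 2 vCPU / 15 GB figures above come straight from the announcement; a minimal sketch of how the larger OCU changes capacity planning might look like this (the helper function and its 20% headroom are illustrative assumptions, not official AWS sizing guidance):

```python
import math

OCU_MEMORY_GB = 15  # per the announcement (previously 8 GB)
OCU_VCPUS = 2

def min_ocus(peak_memory_gb: float, headroom: float = 0.2) -> int:
    """Estimate the minimum OCU count for a pipeline whose in-memory
    working set (aggregation state, trace analytics buffers, etc.)
    peaks at peak_memory_gb, keeping a safety headroom.
    Illustrative only; follow AWS sizing guidance for real workloads."""
    usable_per_ocu = OCU_MEMORY_GB * (1 - headroom)
    return max(1, math.ceil(peak_memory_gb / usable_per_ocu))

# A 40 GB aggregation workload now needs fewer OCUs than at 8 GB each.
print(min_ocus(40))  # -> 4
```

At the old 8 GB per OCU, the same arithmetic (8 × 0.8 = 6.4 GB usable) would have required 7 OCUs, so memory-bound pipelines may see their minimum capacity drop without any configuration change.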
SES Mail Manager adds Debug Logging for traffic policies
Today, Amazon Simple Email Service (SES) Mail Manager announces the addition of a Debug logging level for Mail Manager traffic policies. This new logging level provides more detailed visibility into incoming connections to a customer’s Mail Manager ingress endpoint and makes it easier to troubleshoot delivery challenges quickly, using familiar event destinations such as Amazon CloudWatch, Kinesis, and S3.

With Debug-level logs, customers can now log every possible evaluation and action within a Mail Manager traffic policy, along with envelope data for the email message being evaluated for traffic permission. This enables customers to determine whether their traffic policy is working as expected, or to isolate incoming message parameters that are not covered by the current configuration. When used in conjunction with rules engine logging, debug logging for traffic policies provides a full picture of message arrival into Mail Manager and its disposition by the rules engine. Debug logging for traffic policies is intended for use during active troubleshooting and should otherwise be left disabled, as its output can be verbose for high-volume Mail Manager instances. While SES does not charge an additional fee for this logging feature, customers may incur costs from their chosen event destination. Debug logging for traffic policies is available in all 17 AWS non-opt-in Regions within the AWS commercial partition. To learn more about Mail Manager logging options, see the SES Mail Manager Logging Guide.
AWS Parallel Computing Service (PCS) now supports accounting with Slurm version 24.11
AWS Parallel Computing Service (PCS) now supports Slurm version 24.11 with support for managed accounting. Using this feature, you can enable accounting on your PCS clusters to monitor cluster usage, enforce resource limits, and manage fine-grained access control to specific queues or compute node groups. PCS manages the accounting database for your cluster, eliminating the need for you to set up and manage a separate accounting database.

You can enable this feature on your PCS cluster in just a few clicks using the AWS Management Console. Visit our getting started and accounting documentation pages to learn more about accounting, and see the release notes to learn more about Slurm 24.11. AWS Parallel Computing Service (AWS PCS) is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads and build scientific and engineering models on AWS using Slurm. To learn more about PCS, refer to the service documentation. For pricing details and Region availability, see the PCS Pricing Page and AWS Region Table.
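Besides the console, accounting can be enabled when creating a cluster through the PCS API. The request fragment below is a sketch: the field names follow the general shape of the PCS CreateCluster API, but the `accounting` block and its values are assumptions here, so verify them against the PCS API reference before use.

```python
# Hypothetical CreateCluster request body with managed accounting
# enabled; field names are assumptions modeled on the PCS API shape.
create_cluster_request = {
    "clusterName": "hpc-demo",
    "scheduler": {"type": "SLURM", "version": "24.11"},  # 24.11 required for accounting
    "size": "SMALL",
    "networking": {
        "subnetIds": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
    # Managed accounting: PCS hosts the Slurm accounting database,
    # so no separate slurmdbd/MySQL setup is needed.
    "slurmConfiguration": {"accounting": {"mode": "STANDARD"}},
}

print(create_cluster_request["scheduler"]["version"])
```

A request like this would typically be passed to the `pcs` client in an AWS SDK (e.g. `boto3.client("pcs").create_cluster(**create_cluster_request)`), with usage limits then managed through standard Slurm accounting tools such as `sacctmgr`.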
AWS CodeBuild announces support for remote Docker servers
AWS CodeBuild now supports remote Docker image build servers, allowing you to speed up image build requests. You can provision a fully managed Docker server that maintains a persistent cache across builds. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages.

Centralized image building increases efficiency by reusing cached layers and reducing provisioning and network-transfer latency. CodeBuild automatically configures your build environment to use the remote server when running Docker commands. The Docker server is then readily available to run parallel build requests that can each use the shared layer cache, reducing overall build latency and optimizing build speed. This feature is available in all Regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To get started, read the CodeBuild blog post on setting up a Docker image builder in your CodeBuild project, or visit our documentation. To learn how to get started with CodeBuild, visit the AWS CodeBuild product page.
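Conceptually, the feature attaches a dedicated Docker server to the project's build environment. The fragment below sketches what that configuration might look like; the `dockerServer` attribute and its values are illustrative assumptions, not the exact CodeBuild API shape, so consult the CodeBuild documentation for the real field names.

```python
# Hypothetical CodeBuild project-environment fragment enabling the
# managed remote Docker server; attribute names are illustrative.
project_environment = {
    "type": "LINUX_CONTAINER",
    "computeType": "BUILD_GENERAL1_MEDIUM",
    "image": "aws/codebuild/standard:7.0",
    # Remote Docker server with its own compute and a persistent
    # layer cache shared across builds of this project.
    "dockerServer": {"computeType": "BUILD_GENERAL1_LARGE"},
}

print(project_environment["dockerServer"])
```

Because CodeBuild points the build environment at the remote server automatically, ordinary `docker build` / `docker push` commands in the buildspec need no changes; repeated builds simply hit the warm layer cache instead of rebuilding every layer.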
AWS Transform for .NET is now generally available
AWS Transform for .NET, previewed as “Amazon Q Developer transformation capabilities for .NET porting,” is now generally available. As the first agentic AI service for modernizing .NET applications at scale, AWS Transform helps you modernize Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs. It supports transforming a wide range of .NET project types including MVC, WCF, Web API, class libraries, console apps, and unit test projects.

The agentic transformation begins with a code assessment of your repositories from GitHub, GitLab, or Bitbucket. It identifies .NET versions, project types, and interproject dependencies and generates a tailored modernization plan. You can customize and prioritize the transformation sequence based on your business objectives or architectural complexity before initiating the AI-powered modernization process. Once started, AWS Transform for .NET automatically converts application code, builds the output, runs unit tests, and commits results to a new branch in your repository. It provides a comprehensive transformation summary, including modified files, test outcomes, and suggested fixes for any remaining work. Your teams can track transformation status through the AWS Transform dashboards or interactive chat and receive email notifications with links to transformed .NET code. For workloads that need further human input, your developers can continue refinement using the Visual Studio extension in AWS Transform. The scalable experience of AWS Transform enables consistent modernization across a large application portfolio while moving to cross-platform .NET, unlocking performance, portability, and long-term maintainability.
AWS Transform for .NET is now available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt).
To learn more, read the blog, visit the webpage, or review the documentation.
AWS Transform for mainframe is now generally available
AWS Transform for mainframe, previewed as “Amazon Q Developer transformation capabilities for mainframe” at re:Invent 2024, is now generally available. AWS Transform is the first agentic AI service for modernizing mainframe applications at scale—accelerating modernization of IBM z/OS applications from years to months.

Powered by a specialized AI agent leveraging 19 years of AWS experience, AWS Transform streamlines the entire transformation process—from initial analysis and planning to code documentation and refactoring—helping organizations modernize faster, reduce risk and cost, and achieve better outcomes in the cloud.
This release introduces significant new capabilities. Enhanced analysis features help teams identify cyclomatic complexity, homonyms, and duplicate IDs across codebases, with new export and import functions for file classification and in-UI file viewing and comparison. Documentation generation now supports larger codebases with improved performance and recovery capabilities, including an AI-powered chat experience for querying generated documentation.
Teams can use improved decomposition features to manage dependencies and domain creation, while new deployment templates streamline environment setup for modernized applications. The service also introduces flexible job management, allowing teams to modify objectives and focus on specific transformation steps during reruns.
AWS Transform for mainframe is available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt).
To learn more, read the blog post, register for the upcoming launch webinar, or get started in the AWS Transform web experience.
Announcing migration assessment capabilities of AWS Transform
Today, AWS announces the general availability of migration assessment capabilities in AWS Transform. Migration assessment in AWS Transform analyzes your IT environment to simplify and optimize your cloud journey with intelligent, data-driven insights and actionable recommendations. Simply upload your infrastructure data, and AWS Transform will deliver in minutes a comprehensive analysis that typically takes weeks.

Powered by agentic AI, AWS Transform removes weeks of manual analysis by providing instant visibility into your infrastructure and automatically discovering cost optimization opportunities. AWS Transform produces a business case including key highlights from your server inventory, a summary of current infrastructure, multiple TCO scenarios with varying purchase commitments (on-demand and reserved instances), operating system licensing options (bring your own licenses and license-included), and tenancy options.
AWS Transform for migration assessments is now available in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt).
Ready to get started? Visit the AWS Transform web experience or read our blog post to learn more.
Amazon SageMaker Catalog launches governance for S3 Tables
Amazon SageMaker Catalog integrates with Amazon S3 Tables, making it easy to discover, share, and govern S3 Tables so users can access and query the data with all Apache Iceberg–compatible tools and engines. With Amazon SageMaker Catalog, built on Amazon DataZone, users can securely discover and access approved data and models using semantic search with generative AI–created metadata, or simply ask Amazon Q Developer in natural language to find their data.

S3 Tables deliver the first cloud object store with built-in Apache Iceberg support. Data publishers can onboard S3 tables to SageMaker Lakehouse and enhance their discoverability by adding them to the SageMaker Catalog. Publishers have the flexibility to either directly publish tables or enrich them with valuable business metadata, making it easier for all users to understand and find the data they need. On the consumption side, users can search for relevant tables, request access through a subscription workflow (subject to publisher approval), and leverage this data for advanced analytics and AI development projects. This end-to-end workflow significantly improves data accessibility, governance, and utilization of S3 Tables across the organization. SageMaker Catalog with S3 Tables support is available in all AWS Regions where Amazon SageMaker is available. To learn more, visit Amazon SageMaker. To get started with S3 Tables and publishing, see the user documentation.
AWS Glue Studio now supports additional file types and single file output
Today, AWS Glue Studio announces support for additional compressed file types, Excel files (as source), and XML and Tableau Hyper files (as target). We are also introducing the option to select the number of output files for an S3 target. These enhancements allow you to use visual ETL jobs for data processing workflows that were previously unsupported, for example loading data from an Excel file into a single XML file output.

The new experience now enables you to produce one single file as the output of your Glue job, or to specify a custom number of output files. Further, Glue now supports Excel files via S3 file source nodes, and XML or Tableau Hyper files for S3 file target nodes. The new compression types available are LZ4, SNAPPY, DEFLATE, LZO, BROTLI, ZSTD, and ZLIB. These new features are now available in all AWS commercial Regions and AWS GovCloud (US) Regions where AWS Glue is available. Access the AWS Regional Services List for the most up-to-date availability information. To learn more, visit the AWS Glue documentation.
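In a generated Glue Spark script, the "one output file" option corresponds to coalescing the output to a single partition (roughly `df.coalesce(1).write...`). A plain-Python sketch of the effect, using made-up part-file data to stand in for a distributed job's multi-part output:

```python
import csv
import io

# Simulated multi-part output of a distributed write: each "part file"
# holds a slice of the rows (file names and data are hypothetical).
part_files = {
    "part-0000.csv": [["id", "name"], ["1", "ada"]],
    "part-0001.csv": [["2", "grace"]],
}

# Coalescing to one output file: concatenate all parts, in order,
# into a single CSV document.
buf = io.StringIO()
writer = csv.writer(buf)
for rows in part_files.values():
    writer.writerows(rows)

single_output = buf.getvalue()
print(single_output)
```

Single-file output is convenient for downstream consumers that expect exactly one object (for instance, an Excel-to-XML conversion feeding a legacy system), at the cost of serializing the final write through one worker.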
Amazon EC2 P6-B200 instances powered by NVIDIA B200 GPUs now generally available
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B200 instances, accelerated by NVIDIA B200 GPUs. Amazon EC2 P6-B200 instances offer up to 2x performance compared to P5en instances for AI training and inference.

P6-B200 instances feature 8 Blackwell GPUs with 1440 GB of high-bandwidth GPU memory and a 60% increase in GPU memory bandwidth compared to P5en, 5th Generation Intel Xeon processors (Emerald Rapids), and up to 3.2 terabits per second of Elastic Fabric Adapter (EFAv4) networking. P6-B200 instances are powered by the AWS Nitro System, so you can reliably and securely scale AI workloads within Amazon EC2 UltraClusters to tens of thousands of GPUs.
P6-B200 instances are now available in the p6-b200.48xlarge size through Amazon EC2 Capacity Blocks for ML in the following AWS Region: US West (Oregon).
To learn more about P6-B200 instances, visit Amazon EC2 P6 instances.
AWS Transform for VMware is now generally available
At re:Invent 2024, AWS introduced the preview of Amazon Q Developer transformation capabilities for VMware. That innovation has evolved into AWS Transform for VMware—a first-of-its-kind agentic AI service that’s now generally available. Powered by large language models, graph neural networks, and the deep experience of AWS in enterprise workload migrations, AWS Transform simplifies VMware modernization at scale. Customers and partners can now move faster, reduce migration risk, and modernize with confidence.

VMware environments have long been foundational to enterprise IT, but rising costs and vendor uncertainty are prompting organizations to rethink their strategies. Despite the urgency, VMware workload migration has historically been slow and error-prone. AWS Transform changes that. With agentic AI, AWS Transform automates the full modernization lifecycle—from discovery and dependency mapping to network translation and Amazon Elastic Compute Cloud (Amazon EC2) optimization. Certain tasks that once took weeks can now be completed in minutes. In testing, AWS generated migration wave plans for 500 VMs in just 15 minutes and performed networking translations up to 80x faster than traditional methods. Partners in pilot programs have cut execution times by up to 90%.
Beyond speed, AWS Transform delivers precision and transparency. A shared workspace brings together infrastructure teams, app owners, partners, and AWS experts to resolve blockers and maintain alignment. Built-in human-in-the-loop controls confirm all artifacts are validated before execution. As enterprises aim to break free from legacy constraints and tap into the value of their data, AWS Transform offers a streamlined path to modern, cloud-native architectures. Customers can seamlessly integrate with 200+ AWS services—including analytics, serverless, and generative AI—to accelerate innovation and reduce long-term costs.
Start your VMware modernization journey with AWS Transform. Read the launch blog, explore the documentation, register for the launch webinar, or check out the interactive demo.
PostgreSQL 18 Beta 1 is now available in Amazon RDS Database Preview Environment
Amazon RDS for PostgreSQL 18 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 18 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 18 Beta 1 in the Amazon RDS Database Preview Environment while retaining the benefits of a fully managed database.

PostgreSQL 18 includes significant updates to query execution and I/O operations. Query execution is enhanced with “skip scan” support for multicolumn B-tree indexes and optimized WHERE clause handling for OR and IN (…) conditions. Parallel execution capabilities are expanded through parallel GIN index builds and enhanced join operations. Observability improvements include detailed buffer access statistics in EXPLAIN ANALYZE and enhanced I/O utilization monitoring capabilities. Please refer to the PostgreSQL community announcement for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are billed according to the pricing for the US East (Ohio) Region.
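To make the planner features above concrete, here are the query shapes they apply to, held as SQL strings (the `events` table, its columns, and the exact predicates are made-up illustrations, not from the announcement):

```python
# Hypothetical schema: a multicolumn B-tree index whose leading column
# is tenant_id.
ddl = "CREATE INDEX events_tenant_created_idx ON events (tenant_id, created_at);"

# 1. "Skip scan": in PostgreSQL 18 this index can serve a predicate
#    that omits the leading column (tenant_id) entirely.
skip_scan_query = (
    "SELECT * FROM events "
    "WHERE created_at > now() - interval '1 day';"
)

# 2. Optimized WHERE-clause handling for OR / IN (...) conditions.
in_query = "SELECT * FROM events WHERE tenant_id IN (1, 2, 3);"

# 3. EXPLAIN ANALYZE now reports detailed buffer access statistics,
#    useful for checking which plan the beta actually chooses.
explain = "EXPLAIN (ANALYZE) " + skip_scan_query
print(explain)
```

Running statements like these against a preview-environment instance (and comparing EXPLAIN output with a PostgreSQL 17 instance) is a reasonable way to evaluate whether the new planner behavior benefits your workload before the 60-day retention window expires.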
Amazon WorkSpaces Pools now supports AlwaysOn running mode
Amazon Web Services announces the availability of AlwaysOn running mode for WorkSpaces Pools, designed for customers who want their streaming to start right away. With AlwaysOn mode, users will have their virtual desktop session provisioned in seconds, allowing them to be productive immediately. Customers can now choose between AlwaysOn running mode and the currently available AutoStop mode, which bills an hourly usage fee only while a customer is logged into a session. With AutoStop, streaming starts after a short amount of start-up time, but customers can better optimize cost for unused instances.

Amazon WorkSpaces Pools enables customers to reduce costs by sharing a pool of virtual desktops across a group of users who get a fresh desktop every time they log in. With application settings saved in a central storage repository, simplified management via a single console and set of clients, the ability to support Microsoft 365 Apps for enterprise, and the new running mode options, WorkSpaces Pools offers the flexibility customers expect. AlwaysOn for WorkSpaces Pools is now available in all Regions where WorkSpaces Pools is supported. For pricing information, visit Amazon WorkSpaces Pricing. To learn more about AlwaysOn for WorkSpaces Pools and to get started, see the documentation.
AWS Blogs
AWS Japan Blog (Japanese)
AWS News Blog
- New Amazon EC2 P6-B200 instances powered by NVIDIA Blackwell GPUs to accelerate AI innovations
- Accelerate CI/CD pipelines with the new AWS CodeBuild Docker Server capability
- Accelerate the modernization of Mainframe and VMware workloads with AWS Transform
- AWS Transform for .NET, the first agentic AI service for modernizing .NET applications at scale
AWS Architecture Blog
AWS Big Data Blog
AWS Compute Blog
AWS DevOps & Developer Productivity Blog
AWS HPC Blog
AWS for Industries
- Harnessing the power of pLTE and AWS Cloud to optimize AMI 2.0 outcomes
- Generative AI powered Virtual Data Rooms for Energy
AWS Machine Learning Blog
- How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod
- How Qualtrics built Socrates: An AI platform powered by Amazon SageMaker and Amazon Bedrock
- Vxceed secures transport operations with Amazon Bedrock