12/19/2025, 12:00:00 AM ~ 12/22/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon Application Recovery Controller region switch now supports three new capabilities
Amazon Application Recovery Controller (ARC) Region switch allows you to orchestrate the specific steps to switch your multi-Region applications to operate out of another AWS Region and achieve a bounded recovery time in the event of a Regional impairment to your applications. Region switch saves hours of engineering effort and eliminates the operational overhead previously required to complete failover steps, create custom dashboards, and manually gather evidence of a successful recovery for applications across your organization and hosted in multiple AWS accounts. Today, we are announcing three new Region switch capabilities:
- AWS GovCloud (US) support: ARC Region switch is now generally available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.
- Plan execution reports: Region switch now automatically generates a comprehensive report from each plan execution and saves it to an Amazon S3 bucket of your choice. Each report includes a detailed timeline of events for the recovery operation, resources in scope for the Region switch, alarm states for optional application status alarms, and recovery time objective (RTO) calculations. This eliminates the manual effort previously required to compile evidence and documentation for compliance officers and auditors.
- DocumentDB global cluster execution blocks: Adding to the catalog of 9 execution blocks, Region switch now supports Amazon DocumentDB global cluster execution blocks for automated multi-Region database recovery. This feature allows you to orchestrate DocumentDB global cluster failover and switchover operations within your Region switch plans.
To get started, build a Region switch plan using the ARC console, API, or CLI. See the AWS Regional Services List for availability information. Visit our home page or read the documentation.
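As a rough illustration of the new DocumentDB execution block, the sketch below builds the kind of plan fragment a Region switch plan might contain. Note: the block type name, field names, and overall shape here are assumptions for illustration, not confirmed ARC Region switch API details — consult the ARC documentation for the real plan schema.

```python
# Hypothetical sketch: the execution-block type and field names below are
# assumptions, not confirmed ARC Region switch API details.
import json


def build_docdb_execution_block(global_cluster_id, behavior="failover"):
    """Build a (hypothetical) DocumentDB global cluster execution block
    for a Region switch plan. 'behavior' is 'failover' or 'switchover'."""
    if behavior not in ("failover", "switchover"):
        raise ValueError("behavior must be 'failover' or 'switchover'")
    return {
        "executionBlockType": "DocumentDBGlobalCluster",  # assumed type name
        "configuration": {
            "globalClusterIdentifier": global_cluster_id,
            "behavior": behavior,
        },
    }


block = build_docdb_execution_block("my-global-docdb", "switchover")
print(json.dumps(block, indent=2))
```

The real plan definition is built in the ARC console, API, or CLI; this only shows the failover-versus-switchover choice the announcement describes.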
AWS Private CA OCSP now available in China and AWS GovCloud (US) Regions
AWS Private Certificate Authority (AWS Private CA) now supports Online Certificate Status Protocol (OCSP) in China and AWS GovCloud (US) Regions. AWS Private CA is a fully managed certificate authority service that makes it easy to create and manage private certificates for your organization without the operational overhead of running your own CA infrastructure. OCSP enables real-time certificate validation, allowing applications to check the revocation status of individual certificates on-demand rather than downloading Certificate Revocation List (CRL) files.

With OCSP support, customers in these Regions can implement more efficient certificate validation with minimal bandwidth, typically requiring a few hundred bytes per query, versus downloading large CRLs that can be hundreds of kilobytes or larger. This enables real-time revocation checks for use cases such as validating internal microservices communications, implementing zero trust security architectures, and authenticating IoT devices. AWS Private CA fully manages the OCSP responder infrastructure, providing high availability without requiring you to deploy or maintain OCSP servers. OCSP is now also available in the following AWS Regions: China (Beijing), China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West). To enable OCSP for your certificate authorities, use the AWS Private CA console, AWS CLI, or API. To learn more about OCSP, see Certificate Revocation in the AWS Private CA User Guide. For pricing information, visit the AWS Private CA pricing page.
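For API users, enabling OCSP on an existing CA comes down to updating its revocation configuration. The sketch below builds that payload; the ARN is a placeholder, and the actual `update_certificate_authority` call (shown in a comment) requires valid credentials and an existing CA.

```python
# Minimal sketch of the RevocationConfiguration payload that enables OCSP
# on a private CA. The ARN below is a placeholder.
def ocsp_revocation_config(custom_cname=None):
    """Revocation configuration enabling the managed OCSP responder.
    Optionally set a custom CNAME for the OCSP endpoint."""
    cfg = {"OcspConfiguration": {"Enabled": True}}
    if custom_cname:
        cfg["OcspConfiguration"]["OcspCustomCname"] = custom_cname
    return cfg


payload = {
    "CertificateAuthorityArn": (
        "arn:aws-us-gov:acm-pca:us-gov-west-1:111122223333:"
        "certificate-authority/EXAMPLE"
    ),
    "RevocationConfiguration": ocsp_revocation_config(),
}
# boto3.client("acm-pca").update_certificate_authority(**payload)
print(payload["RevocationConfiguration"])
```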
Amazon SageMaker Studio now supports SOCI indexing for faster container startup times
Today, AWS announces SOCI (Seekable Open Container Initiative) indexing support for Amazon SageMaker Studio, reducing container startup times by 30-50% when using custom images. Amazon SageMaker Studio is a fully integrated, browser-based environment for end-to-end machine learning development. SageMaker Studio provides pre-built container images for popular ML frameworks like TensorFlow, PyTorch, and Scikit-learn that enable quick environment setup. However, when data scientists need to tailor environments for specific use cases with additional libraries, dependencies, or configurations, they can build and register custom container images with pre-configured components to ensure consistency across projects. As ML workloads become increasingly complex, these custom container images have grown in size, leading to startup times of several minutes that create a bottlenecks in iterative ML development where quick experimentation and rapid prototyping are essential.\n SOCI indexing addresses this challenge by enabling lazy loading of container images, downloading only the necessary components to start applications with additional files loaded on-demand as needed. Instead of waiting several minutes for complete custom image downloads, users can begin productive work in seconds while the environment completes initialization in the background. To use SOCI indexing, create a SOCI index for your custom container image using tools like Finch CLI, nerdctl, or Docker with SOCI CLI, push the indexed image to Amazon Elastic Container Registry (ECR), and reference the image index URI when creating SageMaker Image resources. SOCI indexing is available in all AWS Regions where Amazon SageMaker Studio is available. To learn more about implementing SOCI indexing for your SageMaker Studio custom images, see Bring your own SageMaker image in the Amazon SageMaker Developer Guide.
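The SOCI index itself is built and pushed with external tools (Finch, nerdctl, or Docker with the SOCI CLI); the last step is pointing a SageMaker Image at the indexed image in ECR. The sketch below is a small helper that sanity-checks an ECR image URI before it is referenced as a SageMaker image version; the account ID and repository name are placeholders, and the `create_image_version` call is shown only as a comment.

```python
# Helper that validates an ECR image URI (the pattern is a common-case
# approximation) before referencing it in a SageMaker Image version.
import re

ECR_URI = re.compile(
    r"^(?P<account>\d{12})\.dkr\.ecr\.(?P<region>[a-z0-9-]+)\.amazonaws\.com"
    r"/(?P<repo>[a-z0-9._/-]+):(?P<tag>[\w.-]+)$"
)


def parse_ecr_image_uri(uri):
    """Return the parts of an ECR image URI, or raise ValueError."""
    m = ECR_URI.match(uri)
    if not m:
        raise ValueError(f"not a valid ECR image URI: {uri}")
    return m.groupdict()


parts = parse_ecr_image_uri(
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-ml-image:v1"
)
print(parts["repo"], parts["tag"])
# Then reference it in SageMaker, e.g.:
# boto3.client("sagemaker").create_image_version(
#     ImageName="my-studio-image", BaseImage=uri)
```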
Amazon RDS enhances observability for snapshot exports to Amazon S3
Amazon Relational Database Service (RDS) now offers enhanced observability for your snapshot exports to Amazon S3, providing detailed insights into export progress, failures, and performance for each task. These notifications enable you to monitor your exports with greater granularity and predictability.

With snapshot export to S3, you can export data from your RDS database snapshots to Apache Parquet format in your Amazon S3 bucket. This launch introduces four new event types, including current export progress and table-level notifications for long-running tables, providing more granular visibility into your snapshot export performance and recommendations for troubleshooting export operation issues. Additionally, you can view export progress, such as the number of tables exported and pending, along with exported data sizes, enabling you to better plan your operations and workflows. You can subscribe to these events through Amazon Simple Notification Service (SNS) to receive notifications and view the export events through the AWS Management Console, AWS CLI, or SDK. This feature is available for RDS PostgreSQL, RDS MySQL, and RDS MariaDB engines in all Commercial Regions where RDS is generally available. To learn more about the new event types, see Event categories in RDS.
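Subscribing to these events goes through the standard RDS event subscription mechanism. The sketch below builds a `create_event_subscription` request (a real RDS API); the source type and event category values used here are assumptions — list the valid values for your account with `describe-event-categories` before subscribing.

```python
# Sketch of an RDS event subscription request routing snapshot-export
# events to an SNS topic. SourceType and EventCategories values are
# assumptions; verify with rds describe-event-categories.
def export_event_subscription(name, topic_arn):
    return {
        "SubscriptionName": name,
        "SnsTopicArn": topic_arn,
        "SourceType": "db-snapshot",          # assumed source type for exports
        "EventCategories": ["notification"],  # assumed category name
        "Enabled": True,
    }


req = export_event_subscription(
    "snapshot-export-alerts",
    "arn:aws:sns:us-east-1:111122223333:export-alerts",
)
# boto3.client("rds").create_event_subscription(**req)
print(req["SubscriptionName"])
```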
Amazon Bedrock Data Automation launches instruction optimization for your document blueprints
Amazon Bedrock Data Automation (BDA) now supports blueprint instruction optimization, enabling you to improve the accuracy of your custom field extraction using just a few example document assets with ground truth labels. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. Blueprint instruction optimization automatically refines the natural language instructions in your blueprints, helping you achieve production-ready accuracy in minutes without model training or fine-tuning.

With blueprint instruction optimization, you can now bring up to 10 representative document assets from your production workload and provide the correct, expected values for each field. Blueprint instruction optimization analyzes the differences between your expected results and the Data Automation inference results, and then refines the natural language instructions to improve extraction accuracy across your examples. For your intelligent document processing applications, you can now improve the accuracy of extracting insights such as invoice line items, contract terms, tax form fields, or medical billing codes. After optimization completes, you receive detailed evaluation metrics including exact match rates and F1 scores measured against your ground truth, giving you confidence that your blueprint is ready for production deployment. Data Automation blueprint instruction optimization for documents is available in all AWS Regions where Amazon Bedrock Data Automation is supported. To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with blueprint instruction optimization, navigate to your blueprint in the Amazon Bedrock console, go to Data Automation, select your custom outputs for documents, and select Start Optimization.
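To make the evaluation metrics concrete, the sketch below computes an exact match rate over extracted fields versus ground truth labels. This is a generic illustration of the metric, not BDA's internal implementation; the field names and values are made up.

```python
# Generic exact-match-rate metric over extracted fields vs. ground truth.
def exact_match_rate(predictions, ground_truth):
    """Fraction of labeled fields whose extracted value exactly
    equals the ground-truth label."""
    keys = ground_truth.keys()
    hits = sum(1 for k in keys if predictions.get(k) == ground_truth[k])
    return hits / len(keys) if keys else 0.0


truth = {"invoice_number": "INV-1001", "total": "842.50", "currency": "USD"}
pred = {"invoice_number": "INV-1001", "total": "842.50", "currency": "usd"}
print(exact_match_rate(pred, truth))  # 2 of 3 fields match exactly
```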
Timestream for InfluxDB Now Supports Restart API Calls
Amazon Timestream for InfluxDB now offers a restart API for both InfluxDB versions 2 and 3. This new capability enables customers to trigger system restarts on their database instances directly through the AWS Management Console, API, or CLI, to streamline operational management of their time-series database environments.

With the restart API, customers can perform resilience testing to validate their application’s behavior during database restarts and address health-related issues without requiring support intervention. This feature enhances operational flexibility for DevOps teams managing mission-critical workloads, allowing them to implement more comprehensive testing strategies and respond faster to performance concerns by providing direct control over database instance lifecycle operations. Amazon Timestream for InfluxDB restart capability is available in all Regions where Timestream for InfluxDB is offered. To get started with Amazon Timestream for InfluxDB 3, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
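For resilience tests, a restart is typically followed by polling until the instance reports healthy again. The sketch below shows that wait loop with the status lookup injected as a callable; the operation and status names of the actual Timestream for InfluxDB restart API are not shown here and should be taken from the service documentation. The stub makes the flow runnable offline.

```python
# Generic restart-then-wait polling loop. The instance status strings
# ("MODIFYING", "AVAILABLE") are illustrative placeholders.
import time


def wait_until_available(get_status, timeout_s=600, poll_s=10):
    """Poll an instance-status callable until it reports AVAILABLE,
    or give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == "AVAILABLE":
            return True
        time.sleep(poll_s)
    return False


# Offline demo with a stub that becomes AVAILABLE on the third poll:
demo_states = iter(["MODIFYING", "MODIFYING", "AVAILABLE"])
assert wait_until_available(lambda: next(demo_states), poll_s=0)
```

In practice `get_status` would wrap a describe-instance call against the Timestream for InfluxDB API after triggering the restart.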
Announcing Cost Allocation Tags support for Account Tags
AWS announces Cost Allocation tags support for account tags across AWS Cost Management products, enabling customers with multiple member accounts to utilize their existing AWS Organizations account tags directly in cost management tools. Account tags are applied at the account level in AWS Organizations and automatically apply to all metered usage within tagged accounts, eliminating the need to manually configure and maintain separate account groupings in AWS Cost Explorer, Cost and Usage Reports, AWS Budgets, and Cost Categories.

With account tag support, customers can analyze costs by account tag directly in Cost Explorer and Cost and Usage Reports (CUR 2.0 and FOCUS). Customers can set up AWS Budgets and AWS Cost Anomaly Detection alerts on groups of accounts without configuring lists of account IDs. Customers can also build complex cost categories on top of account tags for further categorization. Account tags enable cost allocation for untaggable resources including refunds, credits, and certain service charges that cannot be tagged at the resource level. When new accounts join the organization or existing accounts are removed, customers simply add or update relevant tags, and the changes automatically apply across all cost management products. To get started, customers apply tags to accounts in the AWS Organizations console, then activate those account tags from the Cost Allocation Tags page in the Billing and Cost Management console. This feature is generally available in all AWS Regions, excluding GovCloud (US) Regions and China (Beijing) and China (Ningxia) Regions. To learn more, see organizing and tracking costs using AWS cost allocation tags.
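The two-step flow (tag the account, then activate the tag key for cost allocation) can also be done via the API. The sketch below builds the request payloads for AWS Organizations `tag_resource` and Cost Explorer `update_cost_allocation_tags_status`, both real operations; the account ID and tag values are placeholders, and the calls themselves (shown in comments) require appropriate management-account permissions.

```python
# Request payloads for tagging an account in AWS Organizations and then
# activating the tag key as a cost allocation tag. Account ID and tag
# values are placeholders.
def account_tag_request(account_id, key, value):
    return {"ResourceId": account_id, "Tags": [{"Key": key, "Value": value}]}


def activate_tag_request(key):
    return {"CostAllocationTagsStatus": [{"TagKey": key, "Status": "Active"}]}


tag_req = account_tag_request("111122223333", "CostCenter", "analytics")
activate_req = activate_tag_request("CostCenter")
# boto3.client("organizations").tag_resource(**tag_req)
# boto3.client("ce").update_cost_allocation_tags_status(**activate_req)
print(activate_req["CostAllocationTagsStatus"][0]["Status"])
```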
Amazon ECR now supports creating repositories on push
Amazon Elastic Container Registry (ECR) now supports automatic repository creation on image push. This new capability simplifies container workflows by having ECR automatically create repositories if they don’t exist when an image is pushed, without customers having to pre-create repositories before pushing container images. Now when customers push images, ECR will automatically create repositories according to defined repository creation template settings.

Create on push is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about repository creation templates, please visit our documentation. You can learn more about storing, managing and deploying container images and artifacts with Amazon ECR, including how to get started, from our product page and user guide.
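Repository creation templates define the settings new repositories inherit. The sketch below builds a `create_repository_creation_template` request (a real ECR API); the `appliedFor` value to use for create-on-push is an assumption here — check the ECR documentation for the exact enum value, and the prefix and tags are placeholders.

```python
# Sketch of an ECR repository creation template request. The appliedFor
# value for create-on-push is an assumption; verify against the ECR docs.
def creation_template(prefix):
    return {
        "prefix": prefix,  # repositories created under this namespace
        "description": "auto-created repos for the ML team",
        "imageTagMutability": "IMMUTABLE",
        "appliedFor": ["PULL_THROUGH_CACHE"],  # add the create-on-push value per docs
        "resourceTags": [{"Key": "team", "Value": "platform"}],
    }


tmpl = creation_template("ml-team/")
# boto3.client("ecr").create_repository_creation_template(**tmpl)
print(tmpl["prefix"])
```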
Amazon WorkSpaces Applications now supports Microsoft Windows Server 2025
Amazon WorkSpaces Applications now offers images powered by Microsoft Windows Server 2025, enabling customers to launch streaming instances with the latest features and enhancements from Microsoft’s newest server operating system. This update ensures your application streaming environment benefits from improved security, performance, and modern capabilities.

With Windows Server 2025 support, you can deliver the Microsoft Windows 11 desktop experience to your end users, giving you greater flexibility in choosing the right operating system for your specific application and desktop streaming needs. Whether you’re running business-critical applications or providing remote access to specialized software, you now have expanded options to align your infrastructure decisions with your unique workload requirements and organizational standards. You can select from AWS-provided public images or create custom images tailored to your requirements using Image Builder. Support for Microsoft Windows Server 2025 is now generally available in all AWS Regions where Amazon WorkSpaces Applications is offered. To get started with Microsoft Windows Server 2025 images, visit the Amazon WorkSpaces Applications documentation. For pricing details, see the Amazon WorkSpaces Applications Pricing page.
Amazon Redshift ODBC 2.x Driver now supports Apple macOS
Amazon Redshift ODBC 2.x driver now supports Apple macOS, expanding platform compatibility for developers and analysts. This enhancement allows Apple macOS users to connect to Amazon Redshift clusters using the latest Amazon Redshift ODBC 2.x driver version. You can use an ODBC connection to connect to your Amazon Redshift cluster from many third-party SQL client tools and applications.

The Amazon Redshift ODBC 2.x native driver support enables you to access Amazon Redshift features such as data sharing write capabilities and AWS IAM Identity Center integration - features that are only available through Amazon Redshift drivers. This native Apple macOS support enables seamless integration with Extract, Transform, Load (ETL) and Business Intelligence (BI) tools, allowing you to use Apple macOS while accessing the full suite of Amazon Redshift capabilities. We recommend that you upgrade to the latest Amazon Redshift ODBC 2.x driver version to access new features. For installation instructions and system requirements, please see the Amazon Redshift ODBC 2.x driver documentation.
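For a sense of what connecting looks like from client code, the sketch below assembles a typical ODBC connection string. The registered driver name and keyword names are assumptions based on common ODBC conventions — confirm the exact values in the Amazon Redshift ODBC 2.x driver documentation; the cluster endpoint and user are placeholders.

```python
# Sketch: assembling an ODBC connection string for a Redshift cluster.
# The driver name below is an assumed registered name, not confirmed.
def redshift_odbc_conn_str(server, database, uid, port=5439):
    parts = {
        "Driver": "Amazon Redshift ODBC Driver",  # assumed registered name
        "Server": server,
        "Port": str(port),
        "Database": database,
        "UID": uid,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())


conn_str = redshift_odbc_conn_str(
    "examplecluster.abc123.us-east-1.redshift.amazonaws.com", "dev", "awsuser"
)
# import pyodbc; conn = pyodbc.connect(conn_str + ";PWD=...")
print(conn_str)
```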
AWS Blogs
AWS Japan Blog (Japanese)
- AWS Weekly Roundup: Amazon ECS, Amazon CloudWatch, Amazon Cognito, etc. (December 15, 2025)
- Accelerate AI development using Amazon SageMaker AI with serverless MLflow
- Amazon FSx for NetApp ONTAP has been integrated with Amazon S3 to enable seamless data access
- [EdTech Meetup] EdTech in the AI Era ~Product/Development/Operation Changes and the Future of EdTech ~ [Event Report]
- Amazon FSx for NetApp ONTAP as block storage