10/15/2025, 12:00:00 AM ~ 10/16/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon WorkSpaces Core Managed Instances is now available in 5 additional AWS Regions
AWS today announced Amazon WorkSpaces Core Managed Instances availability in US East (Ohio), Asia Pacific (Malaysia), Asia Pacific (Hong Kong), Middle East (UAE), and Europe (Spain), bringing Amazon WorkSpaces capabilities to these AWS Regions for the first time. WorkSpaces Core Managed Instances in these Regions is supported by partners including Citrix, Workspot, Leostream, and Dizzion.
Amazon WorkSpaces Core Managed Instances simplifies virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. WorkSpaces Core Managed Instances provisions resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads. Managed Instances provide flexibility for organizations requiring specific compute, memory, or graphics configurations. With WorkSpaces Core Managed Instances, you can use existing discounts, Savings Plans, and other features like On-Demand Capacity Reservations (ODCRs), with the operational simplicity of WorkSpaces - all within the security and governance boundaries of your AWS account. This solution is ideal for organizations migrating from on-premises VDI environments or for existing AWS customers seeking enhanced cost optimization without sacrificing control over their infrastructure configurations. You can use a broad selection of instance types, including accelerated graphics instances, while your Core partner solution handles desktop and application provisioning and session management through familiar administrative tools. Customers will incur standard compute costs along with an hourly fee for WorkSpaces Core. See the WorkSpaces Core pricing page for more information. To learn more about Amazon WorkSpaces Core Managed Instances, visit the product page. For technical documentation and getting started guides, see the Amazon WorkSpaces Core Documentation.
AWS SAM CLI adds Finch support, expanding local development tool options for serverless applications
AWS Serverless Application Model Command Line Interface (SAM CLI) now supports Finch as an alternative to Docker for local development and testing of serverless applications. This gives developers greater flexibility in choosing their preferred local development environment when working with SAM CLI to build and test their serverless applications.
Developers building serverless applications spend significant time in their local development environments. SAM CLI is a command-line tool for local development and testing of serverless applications. It allows you to build, test, debug, and package your serverless applications locally before deploying to the AWS Cloud. To provide the local development and testing environment for your applications, SAM CLI uses a tool that can run containers on your local device. Previously, SAM CLI only supported Docker as the tool for running containers locally. Starting today, SAM CLI also supports Finch as a container development tool. Finch is an open-source tool, developed and supported by AWS, for local container development. This means you can now choose between Docker and Finch as your preferred container tool for local development when working with SAM CLI. You can use SAM CLI to invoke Lambda functions locally, test API endpoints, and debug your serverless applications with the same experience you would have in the AWS Cloud. With Finch support, SAM CLI now automatically detects and uses Finch as the container development tool when Docker is not available. You can also set Finch as your preferred container tool for SAM CLI. This new feature supports all core SAM CLI commands, including sam build, sam local invoke, sam local start-api, and sam local start-lambda. To learn more about using SAM CLI with Finch, visit the SAM CLI developer guide.
Claude Haiku 4.5 by Anthropic now in Amazon Bedrock
Claude Haiku 4.5 is now available in Amazon Bedrock. Claude Haiku 4.5 delivers near-frontier performance matching Claude Sonnet 4’s capabilities in coding, computer use, and agent tasks at substantially lower cost and faster speeds, making state-of-the-art AI accessible for scaled deployments and budget-conscious applications.
The model’s enhanced speed makes it ideal for latency-sensitive applications like real-time customer service agents and chatbots where response time is critical. For computer use tasks, Haiku 4.5 delivers significant performance improvements over previous models, enabling faster and more responsive applications. This model supports vision and unlocks new use cases where customers previously had to choose between performance and cost. It enables economically viable agent experiences, supports multi-agent systems for complex coding projects, and powers large-scale financial analysis and research applications. Haiku 4.5 maintains Claude’s unique character while delivering the performance and efficiency needed for production deployments. Claude Haiku 4.5 is now available in Amazon Bedrock via global cross-region inference in multiple locations. To view the full list of available Regions, refer to the documentation. To get started with Haiku 4.5 in Amazon Bedrock, visit the Amazon Bedrock console, Anthropic’s Claude in Amazon Bedrock product page, and the Amazon Bedrock pricing page.
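As a rough illustration, the sketch below calls the model through the Amazon Bedrock Converse API with boto3. The model ID is a placeholder: substitute the exact Claude Haiku 4.5 model ID or global cross-region inference profile ID listed in the Bedrock console or documentation.

```python
import boto3

# Hedged sketch: invoke Claude Haiku 4.5 via the Bedrock Converse API.
# The model ID below is a placeholder -- confirm the exact model ID (or the
# global cross-region inference profile ID) in the Bedrock console or docs.
MODEL_ID = "anthropic.claude-haiku-4-5-v1:0"  # placeholder, not verified

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize why smaller, faster models help latency-sensitive apps."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```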
Second-generation AWS Outposts racks now supported in the AWS Europe (Ireland) Region
Second-generation AWS Outposts racks are now supported in the AWS Europe (Ireland) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.
Organizations from startups to enterprises and the public sector, both in and outside of Europe, can now order Outposts racks connected to this newly supported Region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to. To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
AWS Backup enhances backup plan management with schedule preview
AWS Backup now provides schedule preview for backup plans, helping you validate when your backups are scheduled to run. Schedule preview shows the next ten scheduled backup runs, including when continuous backup, indexing, or copy settings take effect.
Backup plan schedule preview consolidates all backup rules into a single timeline, showing how they work together. You can see when each backup occurs across all backup rules, along with settings like lifecycle to cold storage, point-in-time recovery, and indexing. This unified view helps you quickly identify and resolve conflicts or gaps between your backup strategy and actual configuration.
Backup plan schedule preview is available in all AWS Regions where AWS Backup is available. The feature is enabled automatically, and you can use it from the AWS Backup console, API, or CLI without any additional settings. For more information, visit our documentation.
AWS Step Functions now supports Diagnose with Amazon Q
AWS announces AI-powered troubleshooting capabilities with Amazon Q integration in the AWS Step Functions console. AWS Step Functions is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. This integration brings Amazon Q’s intelligent error analysis directly into the AWS Step Functions console, helping you quickly identify and resolve workflow issues.
When errors occur in your AWS Step Functions workflows, you can now click the “Diagnose with Amazon Q” button that appears in error alerts and the console error notification area to receive AI-assisted troubleshooting guidance. This feature helps you resolve common types of issues, including state machine execution failures as well as Amazon States Language (ASL) syntax errors and warnings. The troubleshooting recommendations appear in a dedicated window with remediation steps tailored to your error context, enabling faster resolution and improved operational efficiency. Diagnose with Amazon Q for AWS Step Functions is available in all commercial AWS Regions where Amazon Q is available. The feature is automatically enabled for customers who have access to Amazon Q in their Region. To learn more about Diagnose with Amazon Q, see Diagnosing and troubleshooting console errors with Amazon Q or get started by visiting the AWS Step Functions console.
Amazon Bedrock simplifies access with automatic enablement of serverless foundation models
Amazon Bedrock now provides immediate access to all serverless foundation models by default for users in all commercial AWS Regions. This update eliminates the need to manually activate model access, allowing you to instantly start using these models through the Amazon Bedrock console playground, AWS SDK, and Amazon Bedrock features including Agents, Flows, Guardrails, Knowledge Bases, Prompt Management, and Evaluations.
While you can quickly begin using serverless foundation models from most providers, Anthropic models, although enabled by default, still require you to submit a one-time usage form before first use. You can complete this form either through the API or through the Amazon Bedrock console by selecting an Anthropic model from the playground. When completed through the AWS organization management account, the form submission automatically enables Anthropic models across all member accounts in the organization. This simplified access is available across all commercial AWS Regions where Amazon Bedrock is supported. Account administrators retain full control over model access through IAM policies and Service Control Policies (SCPs) to restrict access as needed. For implementation guidance and examples on access controls, please refer to our blog.
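For organizations that prefer to restrict some of the now-default models, the sketch below shows one possible Service Control Policy created from the management account with boto3. The policy name and the ARN pattern (here, all Anthropic foundation models) are illustrative assumptions, not a prescribed configuration.

```python
import json
import boto3

# Hedged sketch: deny invocation of a family of serverless foundation models
# organization-wide via an SCP. The ARN pattern is illustrative -- scope it to
# the specific model IDs you actually want to restrict.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySelectedBedrockModels",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.*",
        }
    ],
}

# Must be run from the organization management account (or a delegated admin)
# with AWS Organizations SCPs enabled.
org = boto3.client("organizations")
org.create_policy(
    Content=json.dumps(scp_document),
    Description="Block invocation of selected Bedrock serverless models",
    Name="deny-selected-bedrock-models",  # hypothetical policy name
    Type="SERVICE_CONTROL_POLICY",
)
```

The policy still has to be attached to the relevant organizational units or accounts (for example with attach_policy) before it takes effect.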
DeepSeek, OpenAI, and Qwen models available in Amazon Bedrock in additional Regions
Amazon Bedrock is bringing DeepSeek-V3.1, OpenAI open-weight models, and Qwen3 models to more AWS Regions worldwide, expanding access to cutting-edge AI for customers across the globe. This regional expansion enables organizations in more countries and territories to deploy these powerful foundation models locally, ensuring compliance with data residency requirements, reducing network latency, and delivering faster AI-powered experiences to their users.
DeepSeek-V3.1 and Qwen3 Coder-480B are now available in the US East (Ohio) and Asia Pacific (Jakarta) AWS Regions. OpenAI open-weight models (20B, 120B) and Qwen3 models (32B, 235B, Coder-30B) are now available in the US East (Ohio), Europe (Frankfurt), and Asia Pacific (Jakarta) AWS Regions. Check out the full Region list for future updates. To learn more about these models, visit the Amazon Bedrock product page. To get started, access the Amazon Bedrock console and view the documentation.
Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available
Amazon Aurora PostgreSQL-Compatible Edition now supports zero-ETL integration with Amazon SageMaker, enabling near real-time data availability for analytics workloads. This integration automatically extracts and loads data from PostgreSQL tables into your lakehouse where it’s immediately accessible through various analytics engines and machine learning tools. The data synced into the lakehouse is compatible with Apache Iceberg open standards, enabling you to use your preferred analytics tools and query engines such as SQL, Apache Spark, BI, and AI/ML tools.
Through a simple no-code interface, you can create and maintain an up-to-date replica of your PostgreSQL data in your lakehouse without impacting production workloads. The integration features comprehensive, fine-grained access controls that are consistently enforced across all analytics tools and engines, ensuring secure data sharing throughout your organization. As a complement to the existing zero-ETL integrations with Amazon Redshift, this solution reduces operational complexity while enabling you to derive immediate insights from your operational data. Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm) AWS Regions. To learn more, visit What is zero-ETL. To begin using this new integration, visit the zero-ETL documentation for Aurora PostgreSQL.
Amazon EC2 R8g instances now available in additional regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the South America (Sao Paulo), Europe (London), and Asia Pacific (Melbourne) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5 TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
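As a minimal sketch, launching an R8g instance in one of the newly added Regions with boto3 could look like the following; the AMI ID is a placeholder and must be an arm64 image available in your target Region.

```python
import boto3

# Hedged sketch: launch a Graviton4-based R8g instance in South America (Sao Paulo).
# The AMI ID is a placeholder -- pick an arm64 AMI (e.g., Amazon Linux 2023 arm64)
# that exists in the Region you target.
ec2 = boto3.client("ec2", region_name="sa-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="r8g.2xlarge",        # memory-optimized Graviton4 instance
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "workload", "Value": "in-memory-cache"}],
        }
    ],
)

print(response["Instances"][0]["InstanceId"])
```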
Amazon ECS supports running FireLens as a non-root user
Amazon Elastic Container Service (Amazon ECS) now allows you to run FireLens containers as a non-root user by specifying a user ID in your task definition.
Running the FireLens container as a non-root user with a specific user ID reduces the potential attack surface if the container is compromised; it is a security best practice and a compliance requirement in some industries and in security services such as AWS Security Hub. With this release, Amazon ECS allows you to specify a user ID in the “user” field of the FireLens containerDefinition element of your task definition, instead of only allowing “user”: “0” (the root user). The new capability is supported in all AWS Regions. See the documentation for using FireLens for more details on how to set up your FireLens container to run as non-root.
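As a minimal sketch of what this looks like in practice, the task definition below registers a FireLens log router that runs as a non-root UID; the images, role ARN, UID, and log settings are illustrative assumptions rather than required values.

```python
import boto3

# Hedged sketch: an ECS task definition whose FireLens log router runs as a
# non-root user via the "user" field. ARNs, images, and the UID are placeholders.
ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-firelens",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
            "essential": True,
            "user": "1000",  # previously only "0" (root) was allowed for FireLens
            "firelensConfiguration": {"type": "fluentbit"},
        },
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "essential": True,
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "Name": "cloudwatch_logs",
                    "region": "us-east-1",
                    "log_group_name": "/ecs/app",
                    "log_stream_prefix": "app-",
                    "auto_create_group": "true",
                },
            },
        },
    ],
)
```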
AWS Backup expands information in job APIs and Backup Audit Manager reports
AWS Backup now provides more details in backup job API responses and Backup Audit Manager reports to give you better visibility into backup configurations and compliance settings. You can verify your backup policies with a single API call.
List and Describe APIs for backup, copy, and restore jobs now return fields that previously required multiple API calls to retrieve. Delegated administrators can now view backup job details across their organization. Backup job APIs include retention settings, vault lock status, encryption details, and backup plan information such as plan names, rule names, and schedules. Copy job APIs return destination vault configurations, vault type, lock state, and encryption settings. Restore job APIs show source resource details and vault access policies. Backup Audit Manager reports include new columns with vault type, lock status, encryption details, archive settings, and retention periods. You can use this information to enhance audit trails and verify compliance with data protection policies.
These expanded information fields are available today in all AWS Regions where AWS Backup and AWS Backup Audit Manager are supported, with no additional charges.
To learn more about AWS Backup Audit Manager, visit the product page and documentation. To get started, visit the AWS Backup console.
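As a rough illustration, a single DescribeBackupJob call now surfaces these details; the exact response field names should be confirmed against the AWS Backup API reference.

```python
import boto3

# Hedged sketch: read the expanded backup job details in one call.
# Field names in the response may differ -- check the AWS Backup API reference.
backup = boto3.client("backup")

job = backup.describe_backup_job(BackupJobId="replace-with-a-backup-job-id")

print(job.get("State"), job.get("BackupVaultName"))

# Surface any retention-, vault-, or plan-related attributes returned for the job.
details = {
    key: value
    for key, value in job.items()
    if any(token in key for token in ("Retention", "Vault", "Plan"))
}
print(details)
```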
AWS Application Load Balancer launches URL and Host Header Rewrite
Amazon Web Services (AWS) announces URL and Host Header rewrite capabilities for Application Load Balancer (ALB). This feature enables customers to modify request URLs and Host Headers using regex-based pattern matching before routing requests to targets.
With URL and Host Header rewrites, you can transform URLs using regex patterns (e.g., rewrite “/api/v1/users” to “/users”), standardize URL patterns across different applications, modify Host Headers for internal service routing, remove or add URL path prefixes, and redirect legacy URL structures to new formats. This capability eliminates the need for additional proxy layers and simplifies application architectures. The feature is valuable for microservices deployments where maintaining a single external hostname while routing to different internal services is critical.
You can configure URL and Host Header rewrites through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. There are no additional charges for using URL and Host Header rewrites. You pay only for your use of Application Load Balancer based on Application Load Balancer pricing.
This feature is now available in all AWS commercial regions.
To learn more, visit the ALB Documentation, and the AWS Blog post on URL and Host Header rewrites with Application Load Balancer.
Amazon RDS for MySQL and Amazon RDS for PostgreSQL zero-ETL integration with Amazon Redshift now available in additional Regions
Amazon RDS for MySQL and Amazon RDS for PostgreSQL zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) Regions. Zero-ETL integrations enable near real-time analytics and machine learning (ML) on petabytes of transactional data using Amazon Redshift. Within seconds of data being written to Amazon RDS for MySQL or Amazon RDS for PostgreSQL, the data is replicated to Amazon Redshift.
You can create multiple zero-ETL integrations from a single Amazon RDS database, and you can apply data filtering for each integration to include or exclude specific databases and tables, tailoring the zero-ETL integration to your needs. You can also use AWS CloudFormation to automate the configuration and deployment of resources needed for zero-ETL integrations. To learn more about zero-ETL and how to get started, visit the documentation for Amazon RDS and Amazon Redshift.
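As a minimal sketch, creating such an integration with boto3 might look like the following; the ARNs are placeholders, and the data filter syntax shown should be verified against the zero-ETL documentation for your engine.

```python
import boto3

# Hedged sketch: create a zero-ETL integration from an RDS for MySQL database
# to an Amazon Redshift Serverless namespace in Asia Pacific (Jakarta).
# Both ARNs are placeholders; the DataFilter expression is illustrative.
rds = boto3.client("rds", region_name="ap-southeast-3")

rds.create_integration(
    IntegrationName="orders-to-redshift",
    SourceArn="arn:aws:rds:ap-southeast-3:123456789012:db:orders-mysql",  # placeholder
    TargetArn=(
        "arn:aws:redshift-serverless:ap-southeast-3:123456789012:"
        "namespace/11111111-2222-3333-4444-555555555555"  # placeholder
    ),
    DataFilter="include: ordersdb.*",  # replicate only the ordersdb database
)
```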
Amazon Kinesis Data Streams announces new Fault Injection Service (FIS) actions for API errors
Amazon Kinesis Data Streams now supports Fault Injection Service (FIS) actions for Kinesis API errors. Customers can now test their application’s error handling capabilities, retry mechanisms (such as exponential backoff patterns), and CloudWatch alarms in a controlled environment. This allows customers to validate their monitoring systems and recovery processes before encountering real-world failures, ultimately improving application resilience and availability. This integration supports Kinesis Data Streams API errors including throttling, internal errors, service unavailable, and expired iterator exceptions.
Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store real-time data streams at any scale. Customers can now create real-world Kinesis Data Streams API errors (including 500, 503, and 400 errors for GET and PUT operations) to test application resilience. This feature eliminates the previous need for custom implementations or waiting for actual production failures to verify error handling mechanisms. To get started, customers can create experiment templates through the FIS console to run tests directly or integrate them into their continuous integration pipeline. For additional safety, FIS experiments include automatic stop mechanisms that trigger when customer-defined thresholds are reached, ensuring controlled testing without risking application stability. These actions are generally available in all AWS Regions where FIS is available, including the AWS GovCloud (US) Regions. To learn more about using these actions, please see the Kinesis Data Streams User Guide and FIS User Guide.
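As a rough, heavily hedged sketch of the setup flow with boto3: the FIS action ID and its parameters below are placeholders, so consult the FIS documentation for the exact Kinesis API-error action names, parameters, and any required targets before running anything like this.

```python
import boto3

# Hedged sketch: an FIS experiment template intended to inject Kinesis Data
# Streams API errors. The actionId and its parameters are PLACEHOLDERS -- look
# up the real Kinesis API-error action names and required targets in the FIS docs.
fis = boto3.client("fis")

fis.create_experiment_template(
    clientToken="kinesis-api-error-demo-001",
    description="Inject Kinesis API errors to test retries and alarms",
    roleArn="arn:aws:iam::123456789012:role/fis-experiment-role",  # placeholder
    actions={
        "inject-kinesis-api-errors": {
            "actionId": "aws:kinesis:REPLACE-WITH-REAL-ACTION-ID",  # placeholder
            "parameters": {"duration": "PT5M"},  # parameter names depend on the action
        }
    },
    stopConditions=[
        {
            # Stop the experiment automatically if this alarm goes into ALARM state.
            "source": "aws:cloudwatch:alarm",
            "value": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:app-error-rate",  # placeholder
        }
    ],
)
```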
Amazon MSK adds support for Apache Kafka version 4.1
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 4.1, introducing Queues as a preview feature, a new Streams Rebalance Protocol in early access, and Eligible Leader Replicas (ELR). Along with these features, Apache Kafka version 4.1 includes various bug fixes and improvements. For more details, please refer to the Apache Kafka release notes for version 4.1.
A key highlight of Kafka 4.1 is the introduction of Queues as a preview feature. Customers can use multiple consumers to process messages from the same topic partitions, improving parallelism and throughput for workloads that need point-to-point message delivery. The new Streams Rebalance Protocol builds upon Kafka 4.0’s consumer rebalance protocol, extending broker coordination capabilities to Kafka Streams for optimized task assignments and rebalancing. Additionally, ELR is now enabled by default to strengthen availability. To start using Apache Kafka 4.1 on Amazon MSK, simply select version 4.1.x when creating a new cluster via the AWS Management Console, AWS CLI, or AWS SDKs. You can also upgrade existing MSK provisioned clusters with an in-place rolling update. Amazon MSK orchestrates broker restarts to maintain availability and protect your data during the upgrade. Kafka version 4.1 support is available today across all AWS Regions where Amazon MSK is offered. To learn how to get started, see the Amazon MSK Developer Guide.
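As a minimal sketch, creating a new provisioned cluster on Kafka 4.1 with boto3 might look like the following; the subnet and security group IDs are placeholders, and the exact 4.1.x version string should be taken from ListKafkaVersions or the console.

```python
import boto3

# Hedged sketch: create an MSK provisioned cluster on Apache Kafka 4.1.
# Subnet and security group IDs are placeholders; confirm the exact 4.1.x
# version string offered by MSK (e.g., via list_kafka_versions).
kafka = boto3.client("kafka")

kafka.create_cluster(
    ClusterName="kafka-41-demo",
    KafkaVersion="4.1.x",  # use the 4.1.x version string shown by MSK
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m7g.large",
        "ClientSubnets": [
            "subnet-0123456789abcdef0",  # placeholder
            "subnet-0123456789abcdef1",  # placeholder
            "subnet-0123456789abcdef2",  # placeholder
        ],
        "SecurityGroups": ["sg-0123456789abcdef0"],  # placeholder
    },
)
```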
Amazon RDS for Oracle zero-ETL integration with Amazon Redshift now available in additional Regions
Amazon RDS for Oracle zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) Regions. Amazon RDS for Oracle zero-ETL integration with Amazon Redshift enables near real-time analytics and machine learning (ML) on petabytes of transactional data in Amazon Redshift without complex data pipelines for extract-transform-load (ETL) operations. Within seconds of data being written to an Amazon RDS for Oracle database instance, the data is replicated to Amazon Redshift. Zero-ETL integrations simplify the process of analyzing data from Amazon RDS for Oracle database instances, enabling you to derive holistic insights across multiple applications with ease.
You can use the AWS Management Console, API, CLI, and AWS CloudFormation to create and manage zero-ETL integrations between RDS for Oracle and Amazon Redshift. If you use Oracle multitenant architecture, you can choose specific pluggable databases (PDBs) to selectively replicate them. In addition, you can choose specific tables and tailor replication to your needs. RDS for Oracle zero-ETL integration with Redshift is available with Oracle Database version 19c. To learn more, refer to the Amazon RDS and Amazon Redshift documentation.
AWS Blogs
AWS Japan Blog (Japanese)
- Automated installation of the AWS Systems Manager agent on unmanaged Amazon EC2 nodes
- How I stopped worrying about README files
- Migrate and modernize VMware workloads using AWS Transform for VMware
- AWS Weekly — 2025/10/6
- Information on the release of materials and videos for the AWS Black Belt webinar in September 2025
- Amazon Bedrock AgentCore launches general availability in AWS regions including Tokyo: bringing AI agents to the real world
AWS Cloud Financial Management
AWS Cloud Operations Blog
AWS Big Data Blog
AWS Compute Blog
Containers
Artificial Intelligence
- Transforming enterprise operations: Four high-impact use cases with Amazon Nova
- Building smarter AI agents: AgentCore long-term memory deep dive
- Configure and verify a distributed training cluster with AWS Deep Learning Containers on Amazon EKS
- Scala development in Amazon SageMaker Studio with Almond kernel