9/18/2025, 12:00:00 AM ~ 9/19/2025, 12:00:00 AM (UTC)
Recent Announcements
Second-generation AWS Outposts racks now supported in two additional AWS Regions
Second-generation AWS Outposts racks are now supported in the AWS Canada (Central) and US West (N. California) Regions. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.
Organizations from startups to enterprises and the public sector, in and outside of Canada and the US, can now order Outposts racks connected to these two newly supported Regions, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low-latency access to on-premises systems locally, while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.
To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts racks FAQs page.
Amazon VPC Reachability Analyzer and Amazon VPC Network Access Analyzer available in seven additional Regions
With this launch, Amazon VPC Reachability Analyzer and Amazon VPC Network Access Analyzer are now available in Asia Pacific (New Zealand), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Asia Pacific (Taipei), Canada West (Calgary), Israel (Tel Aviv), and Mexico (Central).
VPC Reachability Analyzer allows you to diagnose network reachability between a source resource and a destination resource in your virtual private clouds (VPCs) by analyzing your network configurations. For example, Reachability Analyzer can help you identify a missing entry in your VPC route table that is blocking connectivity between an EC2 instance in Account A and an EC2 instance in Account B within your AWS Organization. VPC Network Access Analyzer allows you to identify unintended network access to your AWS resources, helping you meet your security and compliance guidelines. For example, you can create a scope to verify that all paths from your web applications to the internet traverse the firewall, and to detect any paths that bypass it. For more information, visit the documentation for VPC Reachability Analyzer and VPC Network Access Analyzer. For pricing, refer to the Network Analysis tab on the Amazon VPC Pricing Page.
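As a minimal sketch of the underlying workflow, the hypothetical snippet below creates a reachability path between two placeholder instance IDs and starts an analysis using the EC2 APIs that back Reachability Analyzer:

```python
import boto3

# Reachability Analyzer is exposed through the EC2 API.
ec2 = boto3.client("ec2", region_name="ca-west-1")  # e.g. Canada West (Calgary)

# Describe the path to test; both instance IDs are placeholders.
path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",       # source EC2 instance
    Destination="i-0fedcba9876543210",  # destination EC2 instance
    Protocol="tcp",
    DestinationPort=443,
)

# Run the analysis; the result reports whether the destination is reachable
# and, if not, which component (for example a missing route) blocks the path.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
print(analysis["NetworkInsightsAnalysis"]["Status"])
```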
Amazon Q Developer CLI announces support for remote MCP servers
Amazon Q Developer CLI now supports remote MCP servers. Remote MCP servers improve the scalability and security of the tools you use within your development tasks. Not only do they reduce compute resource usage by moving to a centralized server, they also help you better manage access and security. You can now integrate with MCP servers, such as Atlassian and GitHub, that support HTTP and OAuth-based authentication.
To configure a remote MCP server, specify the transport type as HTTP, the URL where users will get authentication credentials, and any optional headers to include when making the request. You can configure remote MCP servers in your custom agent configuration or in mcp.json. When a CLI session is initiated, you will see the list of MCP servers to load and can query the list for the authentication URL. Once you successfully complete the authentication steps, Q Developer CLI will query the tools available from the MCP server and make them available to the agent. Remote MCP servers are available in Amazon Q Developer CLI and the Amazon Q Developer IDE plugins. For more information, check out the documentation.
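As an illustration, a remote server entry in mcp.json might look like the hypothetical sketch below; it follows the transport type, URL, and headers described above, but the server name, URL, header, and exact field names are placeholders to verify against the Q Developer documentation:

```json
{
  "mcpServers": {
    "github-remote": {
      "type": "http",
      "url": "https://example.com/mcp/",
      "headers": {
        "X-Example-Header": "optional-value"
      }
    }
  }
}
```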
Amazon Kinesis Data Streams now supports IPv6 in the AWS GovCloud (US) Regions
Amazon Kinesis Data Streams now allows customers to make API requests over Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions. Customers have the option of using either IPv6 or IPv4 when sending requests over dual-stack public or VPC endpoints. The new endpoints have also been validated under the Federal Information Processing Standard (FIPS) 140-3 program.
Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces. Many devices and networks today already use IPv6, and now they can easily write to and read from data streams. FIPS-compliant endpoints help companies contracting with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported Regions. Support for IPv6 with Kinesis Data Streams is now available in all Regions where Kinesis Data Streams is available, including the AWS GovCloud (US) and China Regions. See here for a full listing of our Regions. To learn more about Kinesis Data Streams, please refer to our Developer Guide.
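For illustration, a minimal sketch of calling Kinesis Data Streams over a dual-stack endpoint with the AWS SDK for Python follows; it assumes your SDK version supports the `use_dualstack_endpoint` setting, which resolves the dual-stack hostname for the selected Region:

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve the dual-stack (IPv4 + IPv6) endpoint.
# Add use_fips_endpoint=True if you also need the FIPS-validated endpoint.
kinesis = boto3.client(
    "kinesis",
    region_name="us-gov-west-1",
    config=Config(use_dualstack_endpoint=True),
)

# Any API call now travels over the dual-stack endpoint.
print(kinesis.list_streams(Limit=10)["StreamNames"])
```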
Stability AI Image Services now available in Amazon Bedrock
Amazon Bedrock announces the availability of Stability AI Image Services, a comprehensive suite of 9 specialized image editing tools designed to accelerate professional creative workflows. Stability AI Image Services enable granular control over image editing with a range of tools designed to work with your creative process, allowing you to take a single concept from ideation to finished product with precision and flexibility.
Stability AI Image Services offers two categories of image editing capabilities:
- Edit tools: Remove Background, Erase Object, Search and Replace, Search and Recolor, and Inpaint let you make targeted modifications to specific parts of your images.
- Control tools: Structure, Sketch, Style Guide, and Style Transfer give you powerful ways to generate variations based on existing images or sketches.
Stability AI Image Services is now available in Amazon Bedrock through the API and is supported in US West (Oregon), US East (N. Virginia), and US East (Ohio). For more information on supported Regions, visit the Amazon Bedrock Model Support by Regions guide. For more details about Stability AI Image Services and its capabilities, visit the Stability AI product page and Stability AI documentation page.
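As a rough sketch, invoking one of these tools through the Bedrock runtime API might look like the following; the model ID and request fields are placeholders, so check the Amazon Bedrock model catalog and the Stability AI documentation for the exact identifiers and schema:

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Read a local image and base64-encode it for the request payload.
with open("product-shot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = bedrock.invoke_model(
    modelId="stability.remove-background-v1:0",  # placeholder model ID
    body=json.dumps({"image": image_b64, "output_format": "png"}),  # assumed schema
)
result = json.loads(response["body"].read())
```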
AWS Step Functions expands data source options and improves observability for Distributed Map
AWS Step Functions now supports additional data sources and new observability metrics for Distributed Map. AWS Step Functions is a visual workflow service capable of orchestrating over 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. Distributed Map is a task state of Step Functions that runs the same process for multiple entries in a data set.
With this update, Distributed Map supports additional data inputs, so you can orchestrate large-scale analytics and ETL workflows. You can now process Amazon Athena data manifests and Parquet files directly, iterate over S3 objects under a specified prefix using S3ListObjectsV2, and natively extract array data from JSON objects in Amazon S3 or in state input, eliminating the need for custom pre-processing. You also get visibility into your Distributed Map usage with new metrics such as Approximate Open Map Runs Count, Open Map Run Limit, and Approximate Map Runs Backlog Size. New input sources and improved observability for Distributed Map are available in all commercial AWS Regions where AWS Step Functions is available. To get started, you can use the Distributed Map mode today in the AWS Step Functions console. To learn more, visit the Step Functions developer guide.
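For example, a Distributed Map state can iterate over the objects under an S3 prefix by pointing its ItemReader at S3ListObjectsV2; a minimal Amazon States Language sketch (bucket and prefix are placeholders) might look like:

```json
"ProcessObjects": {
  "Type": "Map",
  "ItemReader": {
    "Resource": "arn:aws:states:::s3:listObjectsV2",
    "Parameters": { "Bucket": "my-input-bucket", "Prefix": "input/" }
  },
  "ItemProcessor": {
    "ProcessorConfig": { "Mode": "DISTRIBUTED", "ExecutionType": "STANDARD" },
    "StartAt": "HandleObject",
    "States": { "HandleObject": { "Type": "Pass", "End": true } }
  },
  "End": true
}
```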
Amazon Lex provides enhanced confirmation and currency built-in slots in 10 additional languages
Amazon Lex now provides support for confirmation and currency slot types in 10 additional languages: Portuguese, Catalan, French, Italian, German, Spanish, Mandarin, Cantonese, Japanese, and Korean. Built-in slots help you build more natural and efficient conversations by understanding synonyms of what your user says and resolving those inputs to a standard format. The confirmation slot understands various expressions of user acknowledgement and converts them into "Yes", "No", "Don't know", or "Maybe". The currency slot identifies currency amounts and represents the input in a structured way. For example, when a user says "nope" or "absolutely not", the confirmation slot resolves to "No", and when the user says "1 dollar", the currency slot resolves it to "USD 1.00".
This feature is available in all commercial AWS Regions where Amazon Lex operates. To learn more about these features, visit the Amazon Lex documentation, or to learn how Amazon Connect and Amazon Lex deliver cloud-based conversational AI experiences for contact centers, visit the Amazon Connect website.
DeepSeek-V3.1 model now available fully managed in Amazon Bedrock
DeepSeek-V3.1 is now available as a fully managed foundation model in Amazon Bedrock. This advanced open weight model allows you to switch between a thinking mode for detailed step-by-step analysis and a non-thinking mode for quicker responses. With comprehensive multilingual support, it delivers enhanced accuracy and reduced hallucinations compared to previous DeepSeek models, while maintaining visibility into its decision-making process.
You can use DeepSeek-V3.1’s enterprise-grade capabilities across critical business functions, from state-of-the-art software development to complex mathematical reasoning and data analysis. The model excels at sophisticated problem-solving tasks, demonstrating strong performance in coding benchmarks and technical challenges. Its enhanced tool-calling capabilities and seamless workflow integration make it ideal for building AI agents and automating enterprise processes, while its transparent reasoning approach helps teams understand and trust its outputs.
DeepSeek-V3.1 is now available in the US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (London), and Europe (Stockholm) AWS Regions. To learn more, read the blog, product page, Amazon Bedrock pricing, and documentation. To get started with DeepSeek in Amazon Bedrock, visit the Amazon Bedrock console.
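As a minimal sketch, calling the model through the Bedrock Converse API might look like this; the model ID below is a placeholder, so confirm the exact DeepSeek-V3.1 identifier in the Amazon Bedrock console:

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="deepseek.v3-1",  # placeholder; verify the real model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Explain eventual consistency in two sentences."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
)
print(response["output"]["message"]["content"][0]["text"])
```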
Qwen3 models are now available fully managed in Amazon Bedrock
Amazon Bedrock continues to expand model choice by adding four Qwen3 open weight foundation models, now available as fully managed, serverless offerings. The lineup includes Qwen3-Coder-480B-A35B-Instruct, Qwen3-Coder-30B-A3B-Instruct, Qwen3-235B-A22B-Instruct-2507, and Qwen3-32B for efficient dense computation. These models feature both dense and Mixture-of-Experts (MoE) architectures, providing flexible options for various development needs.
These open weight models enable you to build powerful AI applications with advanced agentic capabilities, without managing any infrastructure. The two Qwen3-Coder models excel at agentic coding and complex software engineering tasks, offering state-of-the-art performance for function calling and tool use. The 235B model delivers efficient general reasoning and instruction following across diverse tasks, while the 32B dense model provides a more traditional architecture suitable for a wide range of computational tasks. Qwen3-32B and Qwen3-Coder-30B are available today in the US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Tokyo), Europe (Ireland, London, Milan, Stockholm), and South America (São Paulo) AWS Regions. Qwen3-235B is available today in the US West (Oregon), Asia Pacific (Mumbai, Tokyo), and Europe (London, Milan, Stockholm) AWS Regions. Qwen3-Coder-480B is available today in the US West (Oregon), Asia Pacific (Mumbai, Tokyo), and Europe (London, Stockholm) AWS Regions. Check the full Region list for future updates. To learn more, read the blog, product page, Amazon Bedrock pricing, and documentation. To get started with Qwen in Amazon Bedrock, visit the Amazon Bedrock console.
OpenAI open weight models expand to new Regions in Amazon Bedrock
Today, AWS announces the expansion of OpenAI open weight models in Amazon Bedrock to eight new Regions. This expansion brings these powerful AI models closer to customers in various parts of the world, enabling lower latency and improved performance for a wide range of AI-powered applications.
With this expansion, the OpenAI open weight models are now available in the following AWS Regions: US East (N. Virginia), Asia Pacific (Tokyo), Europe (Stockholm), Asia Pacific (Mumbai), Europe (Ireland), South America (São Paulo), Europe (London), and Europe (Milan), in addition to the previously supported US West (Oregon) Region. This broader availability allows more customers to leverage these state-of-the-art AI models while keeping their data within their preferred geographic locations, helping to address data residency requirements and reduce network latency. To learn more about OpenAI open weight models in Amazon Bedrock and how to get started, visit the Amazon Bedrock console or check out our documentation. For more information about the initial release of these models, refer to our previous blog post.
Amazon SageMaker HyperPod now supports autoscaling using Karpenter
Amazon SageMaker HyperPod now supports managed node autoscaling using Karpenter, enabling customers to automatically scale their clusters to meet dynamic inference and training demands. Real-time inference workloads require automatic scaling to address unpredictable traffic patterns and maintain service level agreements, while optimizing costs. However, organizations often struggle with the operational overhead of installing, configuring, and maintaining complex autoscaling solutions. HyperPod-managed node autoscaling eliminates the undifferentiated heavy lifting of Karpenter setup and maintenance, while providing integrated resilience and fault tolerance capabilities.
Autoscaling on HyperPod with Karpenter enables customers to achieve just-in-time provisioning that rapidly adapts GPU compute for inference traffic spikes. Customers can scale to zero nodes during low-demand periods without maintaining dedicated controller infrastructure and benefit from workload-aware node selection that optimizes instance types and costs. For inference workloads, this provides automatic capacity scaling to handle production traffic bursts, cost reduction through intelligent node consolidation during idle periods, and seamless integration with event-driven pod autoscalers like KEDA. Training workloads also benefit from automatic resource optimization during model development cycles. You can enable autoscaling on HyperPod using the UpdateCluster API with AutoScaling mode set to “Enable” and AutoScalerType set to “Karpenter”. This feature is available in all AWS Regions where Amazon SageMaker HyperPod EKS clusters are supported. To learn more about autoscaling on SageMaker HyperPod with Karpenter, see the user guide and blog.
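Based on the API names in the announcement, enabling autoscaling might look like the hedged sketch below; the exact request shape should be confirmed against the SageMaker UpdateCluster API reference:

```python
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-west-2")

# Field names inferred from the announcement; verify before use.
sagemaker.update_cluster(
    ClusterName="my-hyperpod-cluster",  # placeholder cluster name
    AutoScaling={
        "Mode": "Enable",
        "AutoScalerType": "Karpenter",
    },
)
```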
Amazon EVS now supports HCX migration over public internet
Amazon Elastic VMware Service (Amazon EVS) now allows you to securely migrate and stretch your layer 2 networks from your on-premises data centers to Amazon EVS environments over the public internet. This launch adds to the existing capability of migrating workloads to Amazon EVS through dedicated private connectivity such as AWS Direct Connect or Virtual Private Networks (VPNs).
When internet connectivity is enabled, Amazon EVS uses Elastic IP addresses (EIPs) to provide a stable endpoint and faster setup for workload migrations using VMware HCX. You now have the option to use internet connectivity for migration projects when you do not have access to private connectivity options, or as a cost-effective alternative for applications and projects that do not require the high network performance of a private connection during migration.
Public HCX connectivity is available in all AWS Regions where Amazon EVS is available. To learn more about EVS migration options with HCX, visit the AWS User Guide. To learn more about Amazon EVS, visit the product detail page.
AWS Step Functions now supports IPv6 with dual-stack endpoints
AWS Step Functions adds support for IPv6. You can now send IPv6 traffic to AWS Step Functions via new dual-stack IPv4 and IPv6 endpoints. AWS Step Functions is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. This enhancement addresses the growing need for IP addresses as the internet continues to expand, providing a larger address space than the traditional IPv4 format.
With IPv6 support, organizations modernizing their applications can now build serverless workflows without being constrained by the limited IPv4 address space. The new dual-stack endpoints support both IPv4 and IPv6 protocols while maintaining backwards compatibility with existing IPv4 endpoints. Step Functions also supports IPv6 connectivity through AWS PrivateLink interface Virtual Private Cloud (VPC) endpoints, enabling you to access the service privately without traversing the public internet. This allows organizations operating in IPv6 environments to natively integrate with Step Functions without requiring complex translation mechanisms between IPv6 and IPv4. IPv6 support for AWS Step Functions is now generally available in the US East (Ohio), US East (N. Virginia), US West (Oregon), and US West (N. California) Regions, as well as the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. To learn more about IPv6 support on AWS, visit the documentation page.
Amazon OpenSearch Serverless now supports Disk-Optimized Vectors
We are excited to announce the launch of disk-optimized vector support for Amazon OpenSearch Serverless, offering customers a cost-effective solution for vector search operations without compromising on accuracy and recall rates. This new feature enables organizations to implement high-quality vector search capabilities while significantly reducing operational costs.
With the introduction of disk-optimized vectors, customers can now choose between memory-optimized and disk-optimized vector storage options. The disk-optimized option delivers the same high accuracy and recall rates as memory-optimized vectors at lower cost. While this option may introduce slightly higher latency, it is ideal for use cases where sub-millisecond response times aren’t critical, such as semantic search applications, recommendation systems, and other AI-powered search scenarios. Amazon OpenSearch Serverless, our fully managed deployment option, eliminates the complexities of infrastructure management for search and analytics workloads. The service automatically scales compute capacity, measured in OpenSearch Compute Units (OCUs), based on your workload demands.
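In the open-source OpenSearch k-NN engine, disk-optimized storage is selected per vector field with a mode setting; assuming OpenSearch Serverless exposes the same knob, an index mapping sketch might look like the following (field name and dimension are placeholders):

```json
{
  "settings": { "index.knn": true },
  "mappings": {
    "properties": {
      "embedding": {
        "type": "knn_vector",
        "dimension": 1024,
        "mode": "on_disk"
      }
    }
  }
}
```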
Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
AWS Blogs
AWS Japan Blog (Japanese)
- [Event Report] AWS GameDay for Telecom: Winning the DDoS Game
- SBI Sumishin Net Bank Adopts AWS to Move Its Core Banking Systems to the Cloud: All of the Bank's Major Systems Run on AWS, Which It Has Selected as Its Recommended Cloud Provider
AWS News Blog
AWS Cloud Operations Blog
AWS Big Data Blog
Containers
AWS Database Blog
- Dynamic view-based data masking in Amazon RDS and Amazon Aurora MySQL
- Clone Amazon RDS Custom for Oracle to Amazon EC2 using multi-volume EBS snapshots
Desktop and Application Streaming
AWS HPC Blog
AWS for Industries
- Connect to automotive or manufacturing plant displays using VNC and AWS IoT Secure Tunneling
- Amazon Q Business helps Dine resolve up to 34% more support tickets per hour
Artificial Intelligence
- Scale visual production using Stability AI Image Services in Amazon Bedrock
- Prompting for precision with Stability AI Image Services in Amazon Bedrock
- Monitor Amazon Bedrock batch inference using Amazon CloudWatch metrics
- Use AWS Deep Learning Containers with Amazon SageMaker AI managed MLflow