4/7/2026, 12:00:00 AM ~ 4/8/2026, 12:00:00 AM (UTC)

Recent Announcements

AWS Lambda expands response streaming support to all commercial AWS Regions

AWS Lambda response streaming is now available in all commercial AWS Regions, bringing full regional parity for this capability. Customers in newly supported Regions can use the InvokeWithResponseStream API to progressively stream response payloads back to clients as data becomes available.

Response streaming enables functions to send partial responses to clients incrementally rather than buffering the entire response before transmission. This reduces time-to-first-byte (TTFB) latency and is well suited for latency-sensitive workloads such as LLM-based applications, as well as web and mobile applications where users benefit from seeing responses appear incrementally. Response streaming supports payloads up to a default maximum of 200 MB.

With this expansion, customers in all commercial Regions can stream responses using the InvokeWithResponseStream API through a supported AWS SDK, or through Amazon API Gateway REST APIs with response streaming enabled. Response streaming supports Node.js managed runtimes as well as custom runtimes.
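The streamed invoke described above can be sketched with the AWS SDK for Python. The function name below is hypothetical, and the helper shows the shape of the event stream the API returns: payload bytes arrive in PayloadChunk events, and InvokeComplete signals the end of the stream.

```python
"""Sketch: consuming a streamed Lambda response via InvokeWithResponseStream.

Assumes a deployed streaming-enabled function named "my-streaming-fn"
(hypothetical) and AWS credentials in the environment.
"""
import json


def collect_stream(events):
    """Reassemble payload chunks from an InvokeWithResponseStream event stream.

    Each event is a dict; payload bytes arrive under "PayloadChunk", and
    "InvokeComplete" marks the end of the stream (and carries any error).
    """
    parts = []
    for event in events:
        if "PayloadChunk" in event:
            parts.append(event["PayloadChunk"]["Payload"])
        elif "InvokeComplete" in event:
            if event["InvokeComplete"].get("ErrorCode"):
                raise RuntimeError(event["InvokeComplete"]["ErrorCode"])
    return b"".join(parts)


def main():
    import boto3  # requires the AWS SDK for Python

    client = boto3.client("lambda")
    resp = client.invoke_with_response_stream(
        FunctionName="my-streaming-fn",  # hypothetical function name
        Payload=json.dumps({"prompt": "hello"}).encode(),
    )
    # The EventStream yields chunks as the function writes them, so a client
    # can render partial output long before the full response completes.
    print(collect_stream(resp["EventStream"]).decode())


# Call main() from an environment with AWS credentials configured.
```

In a real client you would render each chunk as it arrives rather than joining them, which is where the TTFB benefit comes from.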

Streaming responses incur an additional cost for network transfer of the response payload. You are billed based on the number of bytes generated and streamed out of your Lambda function beyond the first 6 MB. To get started with Lambda response streaming, visit the AWS Lambda documentation.

AWS Cost Explorer launches Natural Language Query capabilities powered by Amazon Q

AWS Cost Explorer now brings Amazon Q Developer’s generative AI capabilities directly into your cost analysis workflows. You can now use natural language queries to ask Amazon Q questions about your AWS cost and usage data. In addition to answering your question, Amazon Q automatically updates visualizations in Cost Explorer. This enables faster cost analysis, reduces time to insights, and makes cost visibility accessible to every team member.

With this launch, you can start your cost analysis with the new suggested prompts in Cost Explorer. These prompts include commonly asked cost questions like “Show me my top spending services for this month.” Amazon Q provides detailed insights while Cost Explorer simultaneously updates the corresponding visualization, filters, and groupings. You can also ask custom questions in your own words using the new ‘Ask Question’ button, exploring your spending patterns conversationally. Cost Explorer automatically updates charts and tables when the analysis is based on your cost and usage data; when Amazon Q compiles insights from additional datasets such as pricing or anomaly detection, visualizations are displayed in Amazon Q’s new artifacts panel. You can continue the conversation with follow-up questions while maintaining full context, allowing you to go from a quick cost check to a deep investigation without switching tools or breaking your workflow.

Natural language cost analysis for AWS Cost Explorer is available today in all commercial AWS Regions at no additional charge. To learn more, visit AWS Cost Explorer. To get started, see the user guide.

Amazon Lightsail is now available in the Asia Pacific (Malaysia) Region

Starting today, Amazon Lightsail is available in the Asia Pacific (Malaysia) Region. This expansion brings the power and simplicity of Lightsail to customers in Malaysia and surrounding regions.

With this launch, customers in Malaysia and nearby countries can now enjoy lower latency and better performance for their applications while meeting local data residency requirements. The new Region provides access to Lightsail’s full range of features, including instances that meet your compute needs, from general purpose to compute-optimized and memory-optimized bundles, as well as managed databases, containers, load balancers, and more, all with the same simple, predictable pricing that Lightsail customers love.

Lightsail is available in these AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Stockholm), and Asia Pacific (Malaysia, Jakarta, Mumbai, Seoul, Singapore, Sydney, Tokyo). To learn more about Regions and Availability Zones for Lightsail, please refer to the documentation. You can use this Region through the Lightsail Console, AWS Command Line Interface (CLI), and AWS SDKs.
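Using the new Region from the SDK can be sketched as follows. The Asia Pacific (Malaysia) Region code is ap-southeast-5; the instance name, blueprint, and bundle IDs below are illustrative, so list the valid ones for the Region first.

```python
"""Sketch: launching a Lightsail instance in the Asia Pacific (Malaysia) Region.

The instance name, blueprint ID, and bundle ID are illustrative; discover the
Region's actual offerings with get_blueprints()/get_bundles() before choosing.
"""


def az_in_region(az, region):
    """True when an AZ name like 'ap-southeast-5a' belongs to the given Region."""
    return az[:-1] == region and az[-1:].isalpha()


def main():
    import boto3  # requires the AWS SDK for Python

    region = "ap-southeast-5"  # Asia Pacific (Malaysia)
    ls = boto3.client("lightsail", region_name=region)

    # Discover what is offered in the Region before choosing.
    blueprints = [b["blueprintId"] for b in ls.get_blueprints()["blueprints"]]
    bundles = [b["bundleId"] for b in ls.get_bundles()["bundles"]]
    print(blueprints[:5], bundles[:5])

    az = region + "a"
    assert az_in_region(az, region)
    ls.create_instances(
        instanceNames=["my-kl-instance"],     # hypothetical instance name
        availabilityZone=az,                  # an AZ within the Region
        blueprintId="amazon_linux_2023",      # illustrative blueprint
        bundleId="nano_3_0",                  # illustrative bundle
    )


# Call main() with credentials for an account with Lightsail access.
```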

Amazon Bedrock now offers Claude Mythos Preview (Gated Research Preview)

Amazon Bedrock, the platform for building generative AI applications and agents at production scale, now offers Claude Mythos Preview in gated research preview as part of Project Glasswing. Claude Mythos Preview is Anthropic’s most advanced AI model to date, representing a fundamentally new model class with state-of-the-art capabilities across cybersecurity, software coding, and complex reasoning tasks. The model can identify sophisticated security vulnerabilities in software and demonstrate exploitability, comprehending large codebases and delivering actionable findings with less manual guidance than previous AI models. This enables security teams to accelerate defensive cybersecurity work, find and fix security vulnerabilities in the world’s most critical software, and address these issues before threats emerge.

Claude Mythos Preview signals an upcoming wave of AI models with powerful cybersecurity capabilities. Anthropic and AWS are taking a deliberately cautious approach to release, prioritizing internet-critical companies and open-source maintainers whose software and digital services impact hundreds of millions of users. This approach gives defenders the opportunity to strengthen their codebases and share what they learn so the whole industry can benefit. Claude Mythos Preview is available in gated preview in the US East (N. Virginia) Region through Amazon Bedrock. Access is limited to an initial allow-list of organizations. If your organization has been allow-listed, your AWS account team will reach out directly. For AWS CISO Amy Herzog’s perspective on this launch and what it means for the future of cybersecurity, read Building AI Defenses at Scale: Before the Threats Emerge.

Amazon SageMaker adds serverless workflows to Identity Center domains

Amazon SageMaker Unified Studio now supports Serverless Workflows in Identity Center domains. With this launch, customers using Identity Center domains can orchestrate data processing tasks with Apache Airflow (powered by Managed Workflows for Apache Airflow) without provisioning or managing Airflow infrastructure. Serverless Workflows were previously available only in IAM-based domains.

Serverless Workflows automatically provision compute resources when a workflow runs and release them when it completes, so you only pay for actual workflow run time. Each workflow runs with its own execution role and isolated worker, providing workflow-level security and preventing cross-workflow interference. With Serverless Workflows, Identity Center domain customers also get access to the Visual Workflow experience with support for around 200 operators, including built-in integration with AWS services such as Amazon S3, Amazon Redshift, Amazon EMR, AWS Glue, and Amazon SageMaker AI.

Serverless Workflows in Identity Center domains are available in all AWS Regions where SageMaker Unified Studio is supported. To learn more, visit the Serverless Workflows documentation.

Announcing Amazon S3 Files, making S3 buckets accessible as file systems

S3 Files delivers a shared file system that connects any AWS compute resource directly with your data in Amazon S3. With S3 Files, Amazon S3 is the first and only cloud object store that provides fully featured, high-performance file system access to your data. It provides full file system semantics and low-latency performance, without your data ever leaving S3. That means file-based applications, agents, and teams can now access and work with your S3 data as a file system using the tools they already depend on, from any compute instance, container, or function. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. You no longer need to duplicate your data or cycle it between object storage and file system storage. S3 Files maintains a view of the objects in your bucket and intelligently translates your file system operations into efficient S3 requests on your behalf. Your file-based applications run on your S3 data with no code changes, AI agents persist memory and share state across pipelines, and ML teams run data preparation workloads without duplicating or staging files first.

Organizations store their analytics data and data lakes in S3, but file-based tools, agents, and applications have never been able to work with that data directly. Bridging that gap meant managing a separate file system, duplicating data, and building complex pipelines to keep object and file storage in sync. S3 Files eliminates that friction and overhead. Using S3 Files, your data is accessible through the file system and directly through S3 APIs at the same time, and thousands of compute resources can connect to the same S3 file system simultaneously, enabling shared access across clusters without duplicating data. S3 Files works with all of your new and existing data in S3 buckets, with no migration required. It caches actively used data for low-latency access and provides up to multiple terabytes per second of aggregate read throughput, so storage never limits performance. There are no data silos, no synchronization complexities, and no tradeoffs: file and object storage, together in one place without compromise.
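"No code changes" means ordinary file I/O works against the mounted bucket. The mount path below is hypothetical, and since a mount is not available here, the runnable demo points the same code at a temporary directory standing in for it.

```python
"""Sketch: file-based code that would run unchanged against an S3 Files mount.

The /mnt/my-s3-files path is hypothetical; for a self-contained demo, the same
function is pointed at a temporary directory standing in for the mount.
"""
import tempfile
from pathlib import Path


def word_count(root):
    """Count words across every .txt file under root.

    Plain file I/O with no S3-specific code: exactly what S3 Files exposes
    over the objects in your bucket.
    """
    total = 0
    for path in Path(root).rglob("*.txt"):
        total += len(path.read_text().split())
    return total


# Against an S3 Files mount this would be something like (hypothetical path):
#   word_count("/mnt/my-s3-files/reports/")
# Stand-in demo with a temporary directory:
with tempfile.TemporaryDirectory() as root:
    Path(root, "a.txt").write_text("three words here")
    Path(root, "b.txt").write_text("two words")
    print(word_count(root))  # 5
```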

S3 Files is now generally available in 34 AWS Regions. For the full list of supported Regions, visit the AWS Capabilities tool. To learn more, visit the product page, S3 pricing page, documentation, and AWS News Blog.

Amazon RDS for Oracle now supports M8i and R8i instances

Amazon RDS for Oracle now supports M8i and R8i instances. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and R8i instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances.

M8i and R8i instances are available for Amazon RDS for Oracle under the Bring Your Own License (BYOL) model for Oracle Database Enterprise Edition (EE) and Oracle Database Standard Edition 2 (SE2). To use the new M8i and R8i instances, you can modify your existing RDS database instance or create a new one from the RDS Management Console, or by using the AWS SDK or CLI. See Amazon RDS for Oracle Pricing for up-to-date pricing and regional availability.
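Modifying an existing instance to one of the new classes can be sketched with the AWS SDK for Python. The instance identifier below is hypothetical; db.m8i.* and db.r8i.* class names follow the standard RDS naming convention.

```python
"""Sketch: moving an RDS for Oracle instance to an M8i/R8i instance class.

The DB instance identifier is hypothetical; check Amazon RDS for Oracle
Pricing for the classes actually offered in your Region.
"""
import re

# RDS instance classes look like db.<family>.<size>, e.g. db.m8i.2xlarge.
_CLASS_RE = re.compile(r"^db\.(m8i|r8i)\.(?:\d+)?(?:x)?large$")


def is_8i_class(instance_class):
    """True for M8i/R8i RDS instance class names such as db.r8i.4xlarge."""
    return bool(_CLASS_RE.match(instance_class))


def main():
    import boto3  # requires the AWS SDK for Python

    rds = boto3.client("rds")
    target = "db.m8i.2xlarge"
    assert is_8i_class(target)
    # ApplyImmediately=False defers the change to the next maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-oracle-db",  # hypothetical identifier
        DBInstanceClass=target,
        ApplyImmediately=False,
    )


# Call main() with credentials for the account that owns the instance.
```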

Amazon Braket adds support for Rigetti’s 108-qubit Cepheus QPU

Amazon Braket, the quantum computing service from AWS, now offers access to Rigetti’s Cepheus-1-108Q device, the first 100+ qubit superconducting quantum processing unit (QPU) available on Amazon Braket. Cepheus-1-108Q uses Rigetti’s modular multi-chip architecture, consisting of a 3x4 array of twelve 9-qubit chiplets with tunable couplers and intermodule couplers between chiplets.

Cepheus-1-108Q introduces CZ (controlled phase) gates, replacing the iSWAP gates used on previous Rigetti QPUs. CZ gates provide higher resilience to phase errors common in superconducting systems, and Rigetti’s adiabatic CZ implementation further reduces leakage errors. These improvements enable customers to run deeper circuits for use cases such as chemical simulation, combinatorial optimization, and machine learning. Customers can build and run quantum programs using the Braket SDK or other frameworks such as Qiskit, CUDA-Q, and PennyLane. Pulse-level control is also available for researchers who need low-level hardware access to study noise, develop gates, or devise error mitigation schemes.
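The 3x4-array-of-9-qubit-chiplets layout can be made concrete with a toy index mapping; the row-major ordering here is an assumption for illustration only, so consult the device's calibration data on Braket for the real topology. The Braket SDK sketch at the end uses a hypothetical device ARN.

```python
"""Toy index mapping for the stated Cepheus-1-108Q layout:
a 3x4 array of twelve 9-qubit chiplets (12 * 9 = 108 qubits).

Row-major ordering is an assumption for illustration, not the device's
actual numbering; the device ARN in main() is hypothetical.
"""

CHIPLET_ROWS, CHIPLET_COLS, QUBITS_PER_CHIPLET = 3, 4, 9


def locate(qubit):
    """Return (chiplet_row, chiplet_col, local_qubit) for a global index 0..107."""
    n_qubits = CHIPLET_ROWS * CHIPLET_COLS * QUBITS_PER_CHIPLET  # 108
    if not 0 <= qubit < n_qubits:
        raise ValueError(f"qubit index must be in [0, {n_qubits})")
    chiplet, local = divmod(qubit, QUBITS_PER_CHIPLET)
    row, col = divmod(chiplet, CHIPLET_COLS)
    return row, col, local


def main():
    # Requires the Amazon Braket SDK (pip install amazon-braket-sdk).
    from braket.aws import AwsDevice
    from braket.circuits import Circuit

    # Bell pair built on CZ: H on the target before and after a CZ is
    # equivalent to a CNOT, so this uses the device's preferred gate.
    circuit = Circuit().h(0).h(1).cz(0, 1).h(1)
    device = AwsDevice("arn:aws:braket:...")  # hypothetical Cepheus device ARN
    task = device.run(circuit, shots=1000)
    print(task.result().measurement_counts)


# Call main() from an account with Braket access in US West (N. California).
```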

Cepheus-1-108Q is available in the US West (N. California) Region. Get started by viewing the device on the Amazon Braket Management Console, reading our Amazon Braket documentation, or applying for AWS credits to support experiments on Amazon Braket through the AWS Cloud Credits for Research program.

AWS Transfer Family now supports IPv6 for connectors and web apps

AWS Transfer Family announces Internet Protocol version 6 (IPv6) support for SFTP connectors, AS2 connectors, and Transfer Family web apps. This enhancement enables your connectors to reach remote servers and trading partners over IPv6 and allows end users to access Transfer Family web apps using IPv6.

AWS Transfer Family offers fully managed support for file transfers over SFTP, AS2, FTPS, FTP, and web browser-based transfers. With IPv6 support for connectors, you can now reach trading partners and remote servers that have adopted IPv6, eliminating connectivity barriers as partners transition away from IPv4. For Transfer Family web apps, IPv6 support enables end users to upload and download files from IPv6-native networks and devices. With dual-stack support across both connectors and web apps, you can communicate with both IPv4 and IPv6 systems and transition at your own pace.
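Verifying that a connector can reach a partner endpoint (IPv4 or IPv6) can be sketched with Transfer Family's TestConnection operation; the connector ID below is hypothetical, and the small helper classifies a partner's literal address family using the standard library.

```python
"""Sketch: checking a trading partner endpoint's address family and testing
an SFTP connector's connectivity. The connector ID is hypothetical."""
import ipaddress


def address_family(host):
    """Classify a literal host address as 'ipv4' or 'ipv6'; None for DNS names."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return None  # a hostname, not a literal address
    return "ipv6" if addr.version == 6 else "ipv4"


def main():
    import boto3  # requires the AWS SDK for Python

    transfer = boto3.client("transfer")
    # Verify the connector can reach the remote server over IPv4 or IPv6.
    result = transfer.test_connection(ConnectorId="c-1234567890abcdef0")  # hypothetical ID
    print(result["Status"])


# Call main() with credentials for the account that owns the connector.
```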

AWS Transfer Family IPv6 support for SFTP connectors, AS2 connectors, and web apps is available in the majority of AWS Regions where AWS Transfer Family is offered. For the full list of supported Regions, visit the AWS Capabilities tool in Builder Center. To learn more, visit the Transfer Family User Guide.

Amazon Aurora now supports PostgreSQL 17.9, 16.13, 15.17, and 14.22

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL versions 17.9, 16.13, 15.17, and 14.22, which include bug fixes from the PostgreSQL community and Aurora-specific enhancements. We recommend upgrading to the latest minor versions to address known security vulnerabilities and benefit from these improvements, as detailed in the release notes.

You can upgrade your databases during scheduled maintenance windows using automatic minor version upgrades. To simplify operations at scale, enable automatic minor version upgrades and use the AWS Organizations Upgrade Rollout Policy to orchestrate thousands of upgrades in phases, upgrading development environments before production systems. You can also use Aurora’s zero-downtime patching to minimize downtime for minor version upgrades. Amazon Aurora is designed for unparalleled performance and availability at global scale with full PostgreSQL compatibility. It provides scale-to-zero serverless compute, Aurora Global Database for multi-Region resilience, Aurora I/O-Optimized for improved price performance on I/O-intensive workloads, and built-in security and continuous backups. To get started, take a look at our getting started page.
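Finding the newest available minor of your major version and opting in to automatic minor version upgrades can be sketched as follows; the instance identifier is hypothetical, and the version-picking helper works on plain version strings.

```python
"""Sketch: selecting the latest minor version for a major engine version and
enabling automatic minor version upgrades. Identifiers are hypothetical."""


def latest_minor(versions, major):
    """From e.g. ["17.7", "17.9", "16.13"], return the newest minor of `major`."""
    candidates = [v for v in versions if v.split(".")[0] == str(major)]
    if not candidates:
        return None
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))


def main():
    import boto3  # requires the AWS SDK for Python

    rds = boto3.client("rds")
    available = [
        v["EngineVersion"]
        for v in rds.describe_db_engine_versions(Engine="aurora-postgresql")[
            "DBEngineVersions"
        ]
    ]
    print("newest 17.x:", latest_minor(available, 17))

    # Opt an instance in to automatic minor version upgrades; the change is
    # applied during the scheduled maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-aurora-instance",  # hypothetical identifier
        AutoMinorVersionUpgrade=True,
        ApplyImmediately=False,
    )


# Call main() with credentials for the account that owns the cluster.
```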

AWS Certificate Manager now supports native certificate search

AWS Certificate Manager (ACM) now provides a search bar in the console that customers can use to find certificates by one or more certificate parameters, such as domain name, certificate ARN, or certificate validity. For example, ACM users who manage multiple certificates can search for certificates with specific domains that are due to expire soon.

To get started, use the new SearchCertificates API, or navigate to the ACM console and use the search bar to search by one or more certificate parameters. This feature is available in all public AWS, AWS China, and AWS GovCloud (US) Regions. To learn more about this feature, please refer to Search Certificates. You can learn more about ACM and get started here.
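The example query above (certificates for a domain expiring soon) can be sketched client-side over the long-standing ListCertificates operation; the SDK surface for the new SearchCertificates API may differ, so treat the filtering helper as an illustration of the query, not the new API's shape.

```python
"""Sketch: finding soon-to-expire certificates for a domain.

The filter below expresses the query client-side over ListCertificates
summaries; consult the ACM API reference for the new SearchCertificates
operation's actual parameters."""
from datetime import datetime, timedelta, timezone


def expiring_soon(certs, domain, within_days=30):
    """Filter cert summaries (dicts with CertificateArn/DomainName/NotAfter)
    to those on the given domain expiring within `within_days` days."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=within_days)
    return [
        c["CertificateArn"]
        for c in certs
        if c["DomainName"].endswith(domain) and c["NotAfter"] <= cutoff
    ]


def main():
    import boto3  # requires the AWS SDK for Python

    acm = boto3.client("acm")
    summaries = acm.list_certificates()["CertificateSummaryList"]
    print(expiring_soon(summaries, "example.com"))  # hypothetical domain


# Call main() with credentials for the account that owns the certificates.
```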

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Database Blog

Artificial Intelligence

AWS for M&E Blog

AWS Security Blog

AWS Storage Blog

Open Source Project

AWS CLI

OpenSearch

Amplify for iOS

Firecracker

AWS Load Balancer Controller