2026-01-16

1/16/2026, 12:00:00 AM ~ 1/19/2026, 12:00:00 AM (UTC) Recent Announcements

Amazon MWAA now available in additional Region

Amazon Managed Workflows for Apache Airflow (MWAA) is now available in the AWS Asia Pacific (Thailand) Region. Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform you use today to orchestrate your workflows, with improved scalability, availability, and security, and without the operational burden of managing the underlying infrastructure. ...

January 16, 2026

2026-01-15

1/15/2026, 12:00:00 AM ~ 1/16/2026, 12:00:00 AM (UTC) Recent Announcements

Amazon S3 on Outposts is now available on second-generation AWS Outposts racks

Amazon S3 on Outposts is now available on second-generation AWS Outposts racks for your on-premises data residency, low-latency, and local data processing use cases. S3 on Outposts on second-generation Outposts racks offers three storage tiers: 196 TB, 490 TB, and 786 TB. Choose the storage tier that matches your workload, whether production, backup, or archival. ...
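Since the tiers are fixed capacities, sizing comes down to picking the smallest tier that covers your requirement. A minimal sketch of that lookup (the tier sizes are taken from the announcement; the helper name is illustrative, not an AWS API):

```python
# Illustrative sizing helper for the three second-generation S3 on Outposts
# storage tiers named in the announcement. Not part of any AWS SDK.
TIERS_TB = [196, 490, 786]

def smallest_fitting_tier(required_tb: float) -> int:
    """Return the smallest tier (in TB) with at least `required_tb` capacity."""
    for tier in TIERS_TB:
        if required_tb <= tier:
            return tier
    raise ValueError(f"{required_tb} TB exceeds the largest tier ({TIERS_TB[-1]} TB)")

print(smallest_fitting_tier(250))  # a 250 TB workload needs the 490 TB tier
```

For example, a 250 TB production dataset lands on the 490 TB tier, while a 150 TB backup set fits in the 196 TB tier.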

January 15, 2026

2025-12-02

12/2/2025, 12:00:00 AM ~ 12/3/2025, 12:00:00 AM (UTC) Recent Announcements

Announcing Amazon EC2 General purpose M8azn instances (Preview)

Starting today, new general-purpose high-frequency, high-network Amazon Elastic Compute Cloud (Amazon EC2) M8azn instances are available in preview. These instances are powered by fifth-generation AMD EPYC (formerly code-named Turin) processors, offering the highest maximum CPU frequency in the cloud at 5 GHz. The M8azn instances offer up to 2x the compute performance of previous-generation M5zn instances. ...

December 2, 2025

2025-11-26

11/26/2025, 12:00:00 AM ~ 11/27/2025, 12:00:00 AM (UTC) Recent Announcements

SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing

Amazon SageMaker HyperPod now supports Managed Tiered KV Cache and Intelligent Routing for large language model (LLM) inference, enabling customers to optimize inference performance for long-context prompts and multi-turn conversations. Customers deploying production LLM applications need fast response times while processing lengthy documents or maintaining conversation context. Traditional inference approaches, however, recalculate attention over all previous tokens with each new token generated, creating computational overhead and escalating costs. ...
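The overhead described above comes from re-deriving the key/value projections for every prior token at each decoding step; a KV cache computes them once per token and reuses them. A toy pure-Python sketch of that caching pattern (not the HyperPod implementation; `project` is a stand-in for the real key/value projection):

```python
# Toy illustration of the KV-cache idea: without a cache, each decoding step
# re-projects every previous token (quadratic work); with a cache, each token
# is projected exactly once (linear work). `project` is a hypothetical
# stand-in, not real attention math.

def project(token: int) -> tuple[int, int]:
    """Stand-in for computing one token's (key, value) projection."""
    return (token * 2, token * 3)

def decode_without_cache(tokens: list[int]) -> int:
    """Recompute K/V for the whole prefix at every step; returns projection count."""
    work = 0
    for step in range(1, len(tokens) + 1):
        for tok in tokens[:step]:      # re-projects every prior token
            project(tok)
            work += 1
    return work

def decode_with_cache(tokens: list[int]) -> int:
    """Cache K/V per token so each is projected once; returns projection count."""
    cache: list[tuple[int, int]] = []
    work = 0
    for tok in tokens:
        cache.append(project(tok))     # only the new token is projected
        work += 1
    return work

seq = list(range(16))
print(decode_without_cache(seq))  # 136 projections (16 * 17 / 2)
print(decode_with_cache(seq))     # 16 projections
```

The gap widens quadratically with sequence length, which is why caching (and tiering the cache across memory levels) matters most for the long-context and multi-turn workloads the announcement calls out.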

November 26, 2025

2025-11-18

11/18/2025, 12:00:00 AM ~ 11/19/2025, 12:00:00 AM (UTC) Recent Announcements

Amazon EC2 P6-B300 instances with NVIDIA Blackwell Ultra GPUs are now available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances, accelerated by NVIDIA Blackwell Ultra B300 GPUs. Amazon EC2 P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory. ...

November 18, 2025