4/27/2026, 12:00:00 AM ~ 4/28/2026, 12:00:00 AM (UTC)

Recent Announcements

Amazon Redshift Serverless AI-driven scaling is now the default for new workgroups

Amazon Redshift Serverless now makes AI-driven scaling and optimization the default for all new workgroups. AI-driven scaling uses machine learning to predict compute needs and automatically adjust resources before queries queue, delivering better price-performance without manual tuning. This release also expands support to workloads with a Base RPU range of 8–512 RPU, up from the previous 32–512 RPU, reducing the entry cost for AI-driven scaling.

With AI-driven scaling and optimization, Amazon Redshift monitors your workload patterns and automatically adjusts compute resources based on query complexity, data volume, and expected data scan size. You can use the price-performance slider to choose whether to prioritize cost, performance, or a balance of both. Amazon Redshift also applies additional optimizations, including automatic materialized views and automatic table design optimization, to meet your selected target. To configure price-performance targets, use the AWS Management Console or Amazon Redshift API operations. You can also modify the target after you create the workgroup. Amazon Redshift Serverless AI-driven scaling and optimization is available in all AWS Regions where Amazon Redshift Serverless is available. For more information, see the Amazon Redshift Serverless product page and the AI-driven scaling and optimization documentation.
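As a rough sketch of what configuring this looks like programmatically, the snippet below builds the parameters for a CreateWorkgroup call with a price-performance target. The `pricePerformanceTarget` shape and the level values are assumptions inferred from this announcement; check the Redshift Serverless API reference before relying on them.

```python
# Sketch: creating a Redshift Serverless workgroup with AI-driven scaling.
# Parameter names under pricePerformanceTarget are assumptions based on the
# announcement; verify against the CreateWorkgroup API reference.
def workgroup_params(name, namespace, base_rpu=8, level=50):
    # Base RPU range for AI-driven scaling is now 8-512 (previously 32-512).
    if not 8 <= base_rpu <= 512:
        raise ValueError("Base RPU must be between 8 and 512")
    return {
        "workgroupName": name,
        "namespaceName": namespace,
        "baseCapacity": base_rpu,
        # The price-performance slider: lower favors cost, higher favors
        # performance; a middle value balances both.
        "pricePerformanceTarget": {"status": "ENABLED", "level": level},
    }

params = workgroup_params("analytics-wg", "analytics-ns")
# import boto3
# boto3.client("redshift-serverless").create_workgroup(**params)
print(params["baseCapacity"])  # 8
```

The same target can be modified later on an existing workgroup, per the announcement.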

Amazon Redshift Serverless is now available in the AWS Asia Pacific (Melbourne) and Canada West (Calgary) regions

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Asia Pacific (Melbourne) and Canada West (Calgary) regions. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started querying data with Amazon Redshift Serverless using Query Editor V2 or your tool of choice. There is no need to choose node types, node count, workload management, scaling, or other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet and Apache Iceberg, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
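Beyond Query Editor V2, queries can also be submitted programmatically through the Redshift Data API by targeting a workgroup instead of a cluster. The sketch below builds the request parameters; the workgroup and database names are placeholders, not defaults.

```python
# Sketch: running SQL against a Redshift Serverless workgroup via the
# Redshift Data API. The workgroup name "default-wg" and database "dev"
# are placeholders for illustration only.
def statement_params(workgroup, database, sql):
    return {
        "WorkgroupName": workgroup,  # a Serverless workgroup, not a ClusterIdentifier
        "Database": database,
        "Sql": sql,
    }

params = statement_params(
    "default-wg", "dev",
    "SELECT * FROM sales ORDER BY amount DESC LIMIT 10",
)
# import boto3
# client = boto3.client("redshift-data")
# resp = client.execute_statement(**params)
# client.get_statement_result(Id=resp["Id"])
```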

Amazon Connect increases attachment file sizes and adds custom file types

Amazon Connect now supports attachment file sizes up to 100 MB for chat, cases, and tasks, up from the previous 20 MB limit. Administrators can enable these higher limits and configure custom file extensions for attachments across chat, email, cases, and tasks through the Amazon Connect admin website or Amazon Connect APIs.

A technology company supporting enterprise customers can now accept files like diagnostic bundles and log archives up to 100 MB through chat, reducing back-and-forth and helping agents resolve issues faster. A financial services firm can add file extensions for signed contracts or compliance documents, giving customers the ability to attach paperwork directly in chat or email.

You can use these features in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Africa (Cape Town), Canada (Central), Europe (Frankfurt), and Europe (London).

To learn more, visit Amazon Connect and see Enable Attachments in the Amazon Connect Administrator Guide.
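A client-side pre-check mirroring these limits can avoid failed uploads. This is a minimal sketch: the 100 MB ceiling comes from the announcement, but the allowed-extension set is illustrative, standing in for whatever an administrator configures.

```python
# Sketch of a client-side attachment pre-check mirroring the new Amazon
# Connect limits: up to 100 MB, plus admin-configured file extensions.
MAX_ATTACHMENT_BYTES = 100 * 1024 * 1024  # raised from the previous 20 MB

def can_attach(filename: str, size_bytes: int, allowed_exts: set) -> bool:
    # Extract the extension, if any, and check it against the allow-list.
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return size_bytes <= MAX_ATTACHMENT_BYTES and ext in allowed_exts

allowed = {"pdf", "zip", "log"}  # e.g. diagnostic bundles, signed contracts
print(can_attach("diagnostics.zip", 80 * 1024 * 1024, allowed))  # True
print(can_attach("dump.bin", 10 * 1024, allowed))                # False
```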

Amazon FSx for OpenZFS Single-AZ (HA) file systems are now available in 17 additional AWS commercial and AWS GovCloud (US) Regions

You can now create Amazon FSx for OpenZFS Single-AZ (HA) file systems in seventeen additional AWS Regions across South America, Europe, Africa, Asia Pacific, and AWS GovCloud (US).

Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system. It’s designed to deliver sub-millisecond latencies and multi-GB/s throughput along with rich ZFS-powered data management capabilities (like snapshots, data cloning, and compression). Single-AZ (HA) file systems are a cost-effective solution for workloads that need high availability but don’t need storage redundancy across multiple availability zones, such as data analytics, machine learning, and semiconductor chip design.

With this expansion, FSx for OpenZFS Single-AZ (HA) file systems are now available in the following additional AWS Regions: Africa (Cape Town), Asia Pacific (Hyderabad, Jakarta, Malaysia, Osaka, Taipei, Thailand), Canada West (Calgary), Europe (Milan, Paris, Spain, Zurich), Israel (Tel Aviv), Mexico (Central), South America (São Paulo), and AWS GovCloud (US-East, US-West). To learn more about Amazon FSx for OpenZFS, visit our product page, and see the FSx for OpenZFS Region Table for complete regional availability information.
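For orientation, the fragment below sketches the parameters for creating such a file system with boto3. The `SINGLE_AZ_HA_2` deployment type, throughput value, and subnet ID are assumptions based on the FSx CreateFileSystem API; verify them against the API reference for your Region.

```python
# Sketch: parameters for a Single-AZ (HA) FSx for OpenZFS file system.
# DeploymentType, ThroughputCapacity, and the subnet ID are illustrative
# assumptions; check the FSx CreateFileSystem API reference.
params = {
    "FileSystemType": "OPENZFS",
    "StorageCapacity": 1024,                    # GiB of SSD storage
    "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder subnet
    "OpenZFSConfiguration": {
        "DeploymentType": "SINGLE_AZ_HA_2",     # HA pair within one AZ
        "ThroughputCapacity": 320,              # MB/s
    },
}
# import boto3
# boto3.client("fsx").create_file_system(**params)
```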

Amazon SageMaker HyperPod now supports G7e and r5d.16xlarge instances

Amazon SageMaker HyperPod now supports G7e and r5d.16xlarge instances. SageMaker HyperPod is a purpose-built infrastructure for developing, training, and deploying foundation models at scale. It provides a resilient and performant environment with built-in fault tolerance, automated cluster recovery, and optimized distributed training libraries, reducing the undifferentiated heavy lifting of managing large-scale AI/ML infrastructure.

G7e instances are powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and deliver up to 2.3x better inference performance than G6e instances, allowing you to process more requests per second while reducing latency. With up to 768 GB of total GPU memory, G7e instances let you deploy larger language models or run multiple models on a single endpoint. You can use these instances for deploying LLMs, agentic AI, multimodal generative AI, and physical AI models. G7e instances are also well suited for cost-efficient single-node fine-tuning or training of NLP, computer vision, and smaller generative AI models, with up to 1.27x the TFLOPs and up to 4x the GPU-to-GPU bandwidth compared to G6e.

HyperPod also now supports r5d.16xlarge. The r5d.16xlarge instance provides 64 vCPUs, 512 GB of memory, and 5 x 600 GB NVMe SSD instance storage, powered by Intel Xeon Platinum 8000 series processors with a sustained all-core turbo frequency of up to 3.1 GHz. This instance is well suited for distributed training data preprocessing, especially with frameworks such as Ray, large-scale feature engineering, and running memory-heavy orchestration services alongside GPU compute. G7e instances are available in US East (N. Virginia), US East (Ohio), Asia Pacific (Tokyo), and US West (Oregon); r5d.16xlarge is available in all AWS Regions where Amazon SageMaker HyperPod is available.
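The trade-off above can be captured as a tiny lookup for cluster planning. The figures come directly from this announcement; the instance keys are shorthand labels, not HyperPod API instance-type strings, which should be taken from the HyperPod documentation.

```python
# Specs from the announcement; keys are shorthand labels, not the exact
# HyperPod instance-type identifiers.
SPECS = {
    "g7e": {
        "gpu": True, "gpu_mem_gb": 768,
        "use": "LLM serving, fine-tuning, multimodal and agentic AI",
    },
    "r5d.16xlarge": {
        "gpu": False, "vcpus": 64, "mem_gb": 512,
        "use": "data preprocessing, feature engineering, orchestration",
    },
}

def pick_instance(needs_gpu: bool) -> str:
    # GPU-bound serving or training goes to G7e; memory-heavy CPU work
    # (e.g. Ray-based preprocessing) goes to r5d.16xlarge.
    return "g7e" if needs_gpu else "r5d.16xlarge"

print(pick_instance(needs_gpu=False))  # r5d.16xlarge
```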

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Architecture Blog

AWS Cloud Financial Management

AWS Cloud Operations Blog

AWS Big Data Blog

Containers

AWS Database Blog

AWS DevOps & Developer Productivity Blog

AWS for Industries

Artificial Intelligence

AWS Security Blog

Open Source Project

AWS CLI