12/11/2024, 12:00:00 AM ~ 12/12/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon EC2 M7i-flex & M7i instances now available in Asia Pacific (Jakarta) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex and M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Jakarta) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

M7i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads. They deliver up to 19% better price performance compared to M6i instances. M7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices.

M7i instances deliver up to 15% better price performance compared to M6i instances and are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). The bare metal sizes support built-in Intel accelerators (Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology) that offload and accelerate data operations, optimizing performance for these workloads. To learn more, visit the EC2 M7i/M7i-flex instances page.
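As a minimal sketch of launching one of the new sizes, the helper below builds a request payload for an M7i-flex instance and validates the size against the large-to-8xlarge range mentioned above. The AMI and subnet IDs are placeholders; if you use boto3, the resulting dict can be passed to `ec2.run_instances(**params)`.

```python
def build_m7i_flex_launch_params(ami_id: str, subnet_id: str,
                                 size: str = "large", count: int = 1) -> dict:
    """Build a launch request payload for an M7i-flex instance.

    M7i-flex is offered in the most common sizes, large through 8xlarge,
    so other sizes are rejected up front. Pass the returned dict to
    boto3's ec2.run_instances(**params).
    """
    valid_sizes = {"large", "xlarge", "2xlarge", "4xlarge", "8xlarge"}
    if size not in valid_sizes:
        raise ValueError(f"M7i-flex is not offered in size {size!r}")
    return {
        "ImageId": ami_id,          # placeholder AMI ID
        "InstanceType": f"m7i-flex.{size}",
        "MinCount": count,
        "MaxCount": count,
        "SubnetId": subnet_id,      # placeholder subnet in ap-southeast-3
    }

params = build_m7i_flex_launch_params("ami-0123456789abcdef0", "subnet-0abc")
print(params["InstanceType"])  # m7i-flex.large
```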

Amazon SageMaker AI announces availability of P5e and G6e instances for Inference

We are pleased to announce the general availability of inference-optimized ml.g6e instances (powered by NVIDIA L40S Tensor Core GPUs) and ml.p5e instances (powered by NVIDIA H200 Tensor Core GPUs) on Amazon SageMaker.

With 1128 GB of high-bandwidth GPU memory across 8 NVIDIA H200 GPUs, 30 TB of local NVMe SSD storage, 192 vCPUs, and 2 TiB of system memory, ml.p5e.48xlarge instances deliver exceptional performance for compute-intensive AI inference workloads such as large language models (LLMs) with 100B+ parameters, multi-modal foundation models, synthetic data generation, and complex generative AI applications including question answering, code generation, and video and image generation.

Powered by 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors, ml.g6e instances deliver up to 2.5x better performance compared to ml.g5 instances. Customers can use ml.g6e instances to run AI inference for LLMs with up to 13B parameters and diffusion models for generating images, video, and audio.

The ml.p5e and ml.g6e instances are now available on SageMaker in US East (Ohio) and US West (Oregon). To get started, request a limit increase through AWS Service Quotas. For pricing information, visit the pricing page. For more information on deploying models with SageMaker, see the overview and the documentation. To learn more about these instances in general, visit the P5e and G6e product pages.
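The sizing guidance above can be sketched as a small helper that picks between the two new families by model size and shapes a `ProductionVariants` entry of the kind boto3's `sagemaker.create_endpoint_config` accepts. The thresholds and the `ml.g6e.xlarge` size are illustrative assumptions, not official sizing rules.

```python
def pick_inference_instance(model_params_b: float) -> str:
    """Pick a SageMaker instance type for LLM inference, following the
    guidance above: ml.g6e for models up to ~13B parameters,
    ml.p5e.48xlarge for the largest models. Thresholds are illustrative."""
    if model_params_b <= 13:
        return "ml.g6e.xlarge"   # example g6e size; larger sizes exist
    return "ml.p5e.48xlarge"

def build_production_variant(model_name: str, params_b: float) -> dict:
    """Shape matches a ProductionVariants entry for boto3's
    sagemaker.create_endpoint_config."""
    return {
        "VariantName": "AllTraffic",
        "ModelName": model_name,
        "InstanceType": pick_inference_instance(params_b),
        "InitialInstanceCount": 1,
    }

variant = build_production_variant("my-llama-7b", params_b=7)
print(variant["InstanceType"])  # ml.g6e.xlarge
```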

AWS Security Hub now supports PCI DSS v4.0.1 standard

AWS Security Hub now supports automated security checks aligned to the Payment Card Industry Data Security Standard (PCI DSS) v4.0.1. PCI DSS is a compliance framework that provides a set of rules and guidelines for safely handling credit and debit card information. The PCI DSS standard in Security Hub provides a set of AWS security best practices that help you protect your cardholder data environment (CDE). Security Hub PCI DSS v4.0.1 includes 144 automated controls that conduct continual checks against PCI DSS requirements.

The new standard is now available in all public AWS Regions where Security Hub is available and in the AWS GovCloud (US) Regions. To quickly enable the new standard across your AWS environment, we recommend using Security Hub central configuration, which lets you enable the standard in some or all of your organization accounts, across all AWS Regions linked to Security Hub, with a single action. If you currently use the PCI DSS v3.2.1 standard in Security Hub but want to use only v4.0.1, enable the newer version before disabling the older one; this prevents gaps in your security checks. To get started, consult the following list of resources:

Learn more about Security Hub capabilities and features in the AWS Security Hub user guide

Subscribe to the Security Hub SNS topic to receive notifications about new Security Hub features and controls

Try Security Hub at no cost for 30 days on the AWS Free Tier.
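The enable-before-disable migration above can be expressed as an ordered plan. The standards ARN pattern below follows the documented `standards/pci-dss/v/<version>` form, but treat it as an assumption and confirm the exact ARNs with `securityhub.describe_standards()` in your account; note that `batch_disable_standards` takes the account-specific subscription ARN returned by `get_enabled_standards()`, not the standards ARN itself.

```python
def pci_migration_plan(region: str) -> list:
    """Ordered Security Hub steps for moving from PCI DSS v3.2.1 to
    v4.0.1 without a coverage gap: enable v4.0.1 first, then disable
    v3.2.1. Each entry names the boto3 securityhub method to call and
    the standard it targets (ARN pattern is an assumption; verify it
    with describe_standards)."""
    arn = "arn:aws:securityhub:{}::standards/pci-dss/v/{}".format
    return [
        {"action": "batch_enable_standards",  "standard": arn(region, "4.0.1")},
        {"action": "batch_disable_standards", "standard": arn(region, "3.2.1")},
    ]

plan = pci_migration_plan("us-east-1")
```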

Amazon Connect now supports push notifications for mobile chat

Amazon Connect now supports push notifications for mobile chat on iOS and Android devices, improving the customer experience and enabling faster issue resolution. Amazon Connect makes it easy to offer mobile chat experiences using the Amazon Connect Chat SDKs or a webview solution using the communications widget. Now, with built-in push notifications enabled for mobile chat experiences, customers are proactively notified as soon as they receive a new message from an agent or chatbot, even when they are not actively chatting.

Push notifications for mobile chat are available in the US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London) Regions. To learn more and get started, visit the help documentation or the Amazon Connect website.

Amazon EC2 M8g instances now available in AWS Europe (Spain)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in the AWS Europe (Spain) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
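Before launching in a newly supported Region, you can check which of the new sizes are actually offered there. The helper below builds the kwargs for boto3's `ec2.describe_instance_type_offerings` call; the instance sizes and the `eu-south-2` (Spain) Region code are examples.

```python
def offerings_request(instance_types: list, region: str) -> dict:
    """Kwargs for boto3's ec2.describe_instance_type_offerings, used to
    confirm which of the given instance types (e.g. the new M8g sizes)
    are offered in a Region before launching there."""
    return {
        "LocationType": "region",
        "Filters": [
            {"Name": "instance-type", "Values": instance_types},
            {"Name": "location", "Values": [region]},
        ],
    }

# Example: check two M8g sizes in the Europe (Spain) Region.
req = offerings_request(["m8g.large", "m8g.metal-24xl"], "eu-south-2")
```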

Amazon Lex launches new multilingual speech recognition models

We are excited to announce the general availability of new multilingual streaming speech recognition models (ASR-2.0) in Amazon Lex. These models enhance recognition accuracy through two specialized groupings: one European-based model supporting Portuguese, Catalan, French, Italian, German, and Spanish, and another Asia Pacific-based model supporting Chinese, Korean, and Japanese.

These Amazon Lex multilingual streaming models leverage shared language patterns within each group to deliver improved recognition accuracy. The models particularly excel at recognizing alphanumeric speech, making it easier to accurately understand customer utterances that are often needed to identify callers and automate tasks in Interactive Voice Response (IVR) applications. For example, the new models better recognize account numbers, confirmation numbers, serial numbers, and product codes. These improvements extend to all regional variants of supported languages (for example, both European French and Canadian French benefit from this enhancement). Additionally, the new models demonstrate improved recognition accuracy for non-native speakers and various regional accents, making interactions more inclusive and reliable.

These models are now the standard for supported languages in Amazon Lex, and customers simply need to rebuild their existing bots to take advantage of these improvements. The new ASR-2.0 models are now available in all regions that support Amazon Lex V2.
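Since existing bots only need a rebuild to pick up the new models, the sketch below prepares one request per locale for boto3's Lex V2 `build_bot_locale` call (client name `lexv2-models`), which rebuilds against the bot's DRAFT version. The locale codes listed are representative examples of the two language groups, not an official list.

```python
# Locale groups covered by the new ASR-2.0 models, per the announcement
# (locale codes are representative examples).
EUROPEAN = ["pt_PT", "ca_ES", "fr_FR", "it_IT", "de_DE", "es_ES"]
ASIA_PACIFIC = ["zh_CN", "ko_KR", "ja_JP"]

def rebuild_requests(bot_id: str, locales: list) -> list:
    """One kwargs dict per locale for boto3's lexv2-models
    build_bot_locale call; rebuilding refreshes each locale so the bot
    picks up the new ASR-2.0 models. Rebuilds run against DRAFT."""
    return [
        {"botId": bot_id, "botVersion": "DRAFT", "localeId": lid}
        for lid in locales
    ]

reqs = rebuild_requests("EXAMPLEBOT", EUROPEAN + ASIA_PACIFIC)
```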

Amazon Keyspaces (for Apache Cassandra) announces frozen collections in AWS GovCloud (US) Regions

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.

Today, Amazon Keyspaces added support for frozen collections in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. With support for frozen collections, the primary keys in your tables can contain collections, allowing you to index your tables on more complex and richer data types. Additionally, using frozen collections, you can create nested collections, which let you model your data in a more efficient, real-world manner. The AWS console extends the native Cassandra experience by giving you the ability to intuitively create and view nested collections that are several levels deep.

Support for frozen collections is available in all commercial AWS Regions and the AWS GovCloud (US) Regions where AWS offers Amazon Keyspaces. If you’re new to Amazon Keyspaces, the getting started guide shows you how to provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
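For illustration, here is the frozen-collection DDL the feature enables, written as standard Cassandra CQL held in a Python string (the keyspace, table, and column names are made up). The frozen map appears in the primary key, which is exactly what unfrozen collections cannot do.

```python
# Standard Cassandra CQL, which Amazon Keyspaces accepts: a frozen map
# used as a clustering column in the primary key, plus a nested frozen
# collection column. Keyspace/table names are illustrative.
CREATE_TABLE_CQL = """
CREATE TABLE demo_ks.device_readings (
    device_id text,
    tags frozen<map<text, text>>,
    history list<frozen<map<text, int>>>,
    reading double,
    PRIMARY KEY (device_id, tags)
)
"""
print("frozen<map" in CREATE_TABLE_CQL)  # True
```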

AWS Network Firewall is now available in the AWS Asia Pacific (Malaysia) region

Starting today, AWS Network Firewall is available in the AWS Asia Pacific (Malaysia) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs).

AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure. It is integrated with AWS Firewall Manager to provide you with central visibility and control over your firewall policies across multiple AWS accounts. To see which regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.

Amazon EC2 R8g instances now available in AWS Asia Pacific (Tokyo)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Asia Pacific (Tokyo) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5 TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Amazon Keyspaces (for Apache Cassandra) now supports User-Defined Types in AWS GovCloud (US) Regions

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.

Today, Amazon Keyspaces added support for Cassandra's User-Defined Types (UDTs) in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. With support for UDTs, you can continue using any custom data types that are defined in your Cassandra workloads in Amazon Keyspaces without making schema modifications. With this launch, you can use UDTs in the primary key of your tables, allowing you to index your data on more complex and richer data types. Additionally, UDTs enable you to create data models that are more efficient and closer to the data hierarchies found in real-world data. The AWS console enhances the original Cassandra experience by allowing you to easily create and visualize nested UDTs at multiple levels.

Support for UDTs is available in all commercial AWS Regions and the AWS GovCloud (US) Regions where AWS offers Amazon Keyspaces. If you're new to Amazon Keyspaces, the getting started guide shows you how to provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
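To make the nested-UDT-in-primary-key idea concrete, here is standard Cassandra DDL held in Python strings: one UDT nested inside another, then used as a table's partition key. In Cassandra, a UDT embedded in another UDT or in a key must be frozen, hence the `frozen<...>` wrappers; all names are illustrative.

```python
# Standard Cassandra CQL: a UDT nested inside another UDT and used in a
# table's primary key. Nested/key UDTs must be frozen in Cassandra.
# Keyspace, type, and table names are illustrative.
CREATE_UDTS_CQL = [
    "CREATE TYPE demo_ks.address (street text, city text, zip text)",
    "CREATE TYPE demo_ks.contact (name text, addr frozen<address>)",
    """CREATE TABLE demo_ks.customers (
        contact frozen<contact>,
        created_at timestamp,
        PRIMARY KEY (contact)
    )""",
]
for stmt in CREATE_UDTS_CQL:
    print(stmt.split()[1])  # CREATE statement kind: TYPE, TYPE, TABLE
```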

Amazon SES now offers Global Endpoints for multi-region sending resilience

Today, Amazon Simple Email Service (Amazon SES) announces the availability of Global Endpoints, a feature for resilient sending through two commercial AWS Regions. Global Endpoints works with the SES APIv2 and allows customers to choose a primary and secondary Region that share email sending workloads in an equal split under normal circumstances. If either Region suffers an impairment, traffic shifts away from the affected Region to the other, ensuring that email sending continues.

Unlike manual multi-Region setups, Global Endpoints simplifies the synchronization of verified identities, approved sending limits, and configuration sets between the two chosen Regions. Because the Regions operate in a load-balanced manner, both have active IP addresses ready for customer email sending activity, and no manual effort is required to redistribute sending jobs during outages. Customers use a multi-Region endpoint ID in place of an individual Region endpoint, without needing to make changes to their sending configurations. The feature also works with existing SES features, such as Dedicated IPs and Virtual Deliverability Manager. Global Endpoints is available across all commercial AWS Regions where SES sending is already available. A new blog describing the solution in detail is available here, and you can also click here for more information about Global Endpoints and to begin the simple, guided onboarding process for initial setup.
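A minimal sketch of sending through a multi-Region endpoint: the helper builds kwargs for the SES APIv2 `send_email` call, passing the multi-Region endpoint ID in place of relying on a single Region's endpoint. The `EndpointId` field name is an assumption here; check the SES APIv2 reference for the exact parameter, and the addresses and ID are placeholders.

```python
def build_send_kwargs(endpoint_id: str, sender: str, recipient: str) -> dict:
    """Kwargs for boto3's sesv2 send_email, routed through a Global
    (multi-Region) Endpoint. "EndpointId" is an assumed field name; the
    endpoint ID and addresses are placeholders."""
    return {
        "EndpointId": endpoint_id,  # assumed parameter name (see lead-in)
        "FromEmailAddress": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Content": {"Simple": {
            "Subject": {"Data": "Hello"},
            "Body": {"Text": {"Data": "Sent via a Global Endpoint."}},
        }},
    }

kwargs = build_send_kwargs("mre-example-id", "no-reply@example.com",
                           "user@example.com")
```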

AWS Blogs

AWS Japan Blog (Japanese)

AWS Japan Startup Blog (Japanese)

AWS News Blog

AWS Cloud Operations Blog

AWS Big Data Blog

AWS Compute Blog

Containers

AWS Database Blog

AWS HPC Blog

AWS for Industries

AWS Machine Learning Blog

AWS Messaging & Targeting Blog

Networking & Content Delivery

AWS Quantum Technologies Blog

AWS Security Blog

Open Source Projects

AWS CLI

AWS CDK

OpenSearch

Amplify for JavaScript

Amplify for iOS

Bottlerocket OS

Karpenter