11/6/2024, 12:00:00 AM ~ 11/7/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon Data Firehose support for delivering data into Apache Iceberg tables is available in additional AWS Regions

Amazon Data Firehose support for delivering data streams into Apache Iceberg tables in Amazon S3 is now available in all AWS Regions except the AWS China, AWS GovCloud (US), and ap-southeast-5 Regions.

With this feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs, and many others listed in the documentation. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3 and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed. The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record; to route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify whether an incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, automating processing for data-correction and right-to-be-forgotten scenarios. To learn more and get started, visit the Amazon Data Firehose documentation, pricing, and console.
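As a rough sketch of what such a stream looks like, the snippet below assembles the request for a Firehose delivery stream with an Iceberg destination. The field names (`IcebergDestinationConfiguration`, `DestinationTableConfigurationList`, `UniqueKeys`) follow our reading of the Firehose CreateDeliveryStream API and should be verified against the current documentation; all ARNs and names are placeholders.

```python
# Hypothetical sketch: build CreateDeliveryStream parameters for delivering a
# data stream into an Apache Iceberg table in S3. Field names are assumptions
# based on the Firehose API -- verify against the current API reference.

def iceberg_stream_request(stream_name, role_arn, catalog_arn,
                           database, table, unique_keys, bucket_arn):
    """Assemble CreateDeliveryStream parameters for an Iceberg destination."""
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",  # or a Kinesis/MSK source type
        "IcebergDestinationConfiguration": {
            "RoleARN": role_arn,
            "CatalogConfiguration": {"CatalogARN": catalog_arn},
            # One entry per destination table; UniqueKeys supports the
            # row-level update/delete semantics described above.
            "DestinationTableConfigurationList": [{
                "DestinationDatabaseName": database,
                "DestinationTableName": table,
                "UniqueKeys": unique_keys,
            }],
            # Backup/error-output S3 settings.
            "S3Configuration": {"RoleARN": role_arn, "BucketARN": bucket_arn},
        },
    }

req = iceberg_stream_request(
    "clickstream-to-iceberg",
    "arn:aws:iam::123456789012:role/firehose-role",
    "arn:aws:glue:us-east-1:123456789012:catalog",
    "analytics", "click_events", ["event_id"],
    "arn:aws:s3:::my-datalake-bucket")
# A real call would then be:
#   boto3.client("firehose").create_delivery_stream(**req)
```

Because Firehose is serverless, this request (plus IAM permissions for the role) is essentially the whole setup; there is no cluster to size or manage.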

AWS Well-Architected adds enhanced implementation guidance

Today, we are announcing updates to the AWS Well-Architected Framework, featuring comprehensive guidance to help customers build and operate secure, high-performing, resilient, and efficient workloads on AWS. This update includes 14 newly refreshed best practices, with the Reliability Pillar receiving its first major improvements since 2022.

The refreshed Framework offers prescriptive guidance, expanded best practices, and updated resources to help customers tailor AWS recommendations to their specific needs, accelerating cloud adoption and applying best practices more effectively. These updates strengthen workload security, reliability, and efficiency, empowering organizations to scale confidently and build resilient, sustainable architectures. The Reliability Pillar, in particular, provides deeper insights for creating dependable cloud solutions.

Lorenzo Modesto, CEO of Well-Architected Partner 6Pillar, on the updated guidance: “While the updated content that the AWS Well-Architected Team is generating is massively helpful for both WA Partners and those AWS Consulting Partners who want to become WA Partners, what’s most powerful is the focus on partners automating their WA practices.”

The updated AWS Well-Architected Framework is available now for all AWS customers. Updates in this release will be incorporated into the AWS Well-Architected Tool in future releases, which you can use to review your workloads, address important design considerations, and follow the AWS Well-Architected Framework guidance. To learn more, visit the AWS Well-Architected Framework documentation.

Announcing an improved self-guided experience for AWS Partner Central

AWS is improving the self-guided experience for AWS Partners by adding task categorization and grouping. The new experience helps partners prioritize the key actions needed to accelerate their journey from onboarding to AWS Partner Central to selling on AWS Marketplace.

This new experience makes it easier to quickly understand the benefits of a task, the time required, and the additional resources available to complete it. This helps Partners better triage, prioritize, and delegate tasks as needed. We are also introducing task categories, such as Account, Solution, and Program tasks. Account tasks help partners set up or link their AWS Marketplace accounts, and onboard new Partner Central users. Program tasks recommend relevant programs, guide partners through onboarding, and prompt partners to complete any pending requirements to qualify for program benefits. Solution tasks allow partners to track the progress of their solution development across the build/market/sell/grow stages of the Partner Profitability Framework, as they complete their solution-based journey and list in AWS Marketplace. The new Task experience is available to all AWS Partners globally by logging in to AWS Partner Central and accessing “My tasks” from the AWS Partner Central top navigation. Visit the AWS Partner Network site to learn more about becoming an AWS Partner.

Amazon Redshift Multi-AZ is generally available for RA3 clusters in 3 additional AWS Regions

Amazon Redshift is announcing the general availability of Multi-AZ deployments for RA3 clusters in the Asia Pacific (Malaysia), Europe (London), and South America (Sao Paulo) AWS Regions. Redshift Multi-AZ deployments support running your data warehouse in multiple AWS Availability Zones (AZs) simultaneously, continuing to operate in unforeseen failure scenarios. A Multi-AZ deployment raises the Amazon Redshift Service Level Agreement (SLA) to 99.99% and delivers a highly available data warehouse for the most demanding mission-critical workloads.

Enterprise customers with mission-critical workloads require a data warehouse with fast failover times and simplified operations that minimizes impact to applications. Redshift Multi-AZ deployments help meet these demands by reducing recovery time and automatically recovering in another AZ during an unlikely event such as an AZ failure. A Redshift Multi-AZ data warehouse also maximizes query processing throughput by operating in multiple AZs and using compute resources from both AZs to process read and write queries. Amazon Redshift Multi-AZ is now generally available for RA3 clusters through the Redshift console, API, and CLI. For all Regions where Multi-AZ is available, see the supported AWS Regions. To learn more about Amazon Redshift Multi-AZ, see the Amazon Redshift Reliability page and the Amazon Redshift Multi-AZ documentation.
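As a minimal sketch, the snippet below builds CreateCluster parameters with the Multi-AZ flag set. The `MultiAZ` parameter name follows our understanding of the Redshift CreateCluster API; the cluster identifier, node type, and credentials are placeholders to adapt.

```python
# Sketch (assumed field names per the Redshift CreateCluster API; verify in
# the current API reference): an RA3 cluster with Multi-AZ enabled.

def multi_az_cluster_params(cluster_id, node_type, num_nodes, user, password):
    """Build CreateCluster parameters for a Multi-AZ RA3 deployment."""
    # Multi-AZ is an RA3 feature, so guard against other node families.
    assert node_type.startswith("ra3."), "Multi-AZ requires RA3 node types"
    return {
        "ClusterIdentifier": cluster_id,
        "NodeType": node_type,        # e.g. ra3.4xlarge
        "NumberOfNodes": num_nodes,
        "MasterUsername": user,
        "MasterUserPassword": password,
        "MultiAZ": True,              # run the warehouse across two AZs
    }

params = multi_az_cluster_params(
    "analytics-dw", "ra3.4xlarge", 4, "admin", "Example-Passw0rd")
# Real call: boto3.client("redshift").create_cluster(**params)
```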

Amazon EC2 High Memory instances now available in Asia Pacific (Mumbai) Region

Starting today, Amazon EC2 High Memory instances with 9 TB of memory (u-9tb1.112xlarge) are available in the Asia Pacific (Mumbai) Region. Customers can start using these High Memory instances with On-Demand and Savings Plans purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory. For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, on what this launch means for our SAP customers, read his launch blog.

Amazon S3 Access Grants is now available in the AWS Canada West (Calgary) Region

You can now create Amazon S3 Access Grants in the AWS Canada West (Calgary) Region.

Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity. To learn more about Amazon S3 Access Grants, visit our product detail page, and see the S3 Access Grants Region Table for complete regional availability information.

Amazon CloudFront no longer charges for requests blocked by AWS WAF

Effective October 25, 2024, all CloudFront requests blocked by AWS WAF are free of charge. With this change, CloudFront customers will never incur request fees or data transfer charges for requests blocked by AWS WAF. This update requires no changes to your applications and applies to all CloudFront distributions using AWS WAF.

AWS WAF will, however, continue to bill for evaluating and blocking these requests. To learn more about using AWS WAF with CloudFront, visit Use AWS WAF protections in the CloudFront Developer Guide.

Amazon DataZone Achieves HITRUST Certification

Amazon DataZone has achieved HITRUST certification, demonstrating it meets the requirements established by the Health Information Trust Alliance Common Security Framework (HITRUST CSF) for managing sensitive health data, as required by healthcare and life sciences customers.

This certification includes the testing of over 600 controls derived from multiple security frameworks such as ISO 27001 and NIST 800-53r5, providing a comprehensive set of baseline security and privacy controls. The 2024 AWS HITRUST certification is now available to AWS customers through AWS Artifact in the AWS Management Console. Customers can leverage the certification to meet applicable controls via HITRUST’s Inheritance Program as defined under the HITRUST Shared Responsibility Matrix (SRM). Amazon DataZone is a data management service that makes it faster and easier for customers to catalog, discover, share, and govern data between data producers and consumers within their organization. For more information about Amazon DataZone and how to get started, refer to our product page and review the Amazon DataZone technical documentation.

AWS announces availability of Microsoft Windows Server 2025 images on Amazon EC2

Amazon EC2 now supports Microsoft Windows Server 2025 with License Included (LI) Amazon Machine Images (AMIs), providing customers with an easy and flexible way to launch the latest version of Windows Server. By running Windows Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest Windows Server features.

Amazon EC2 is the proven, reliable, and secure cloud for your Windows Server workloads. Amazon creates and manages Microsoft Windows Server 2025 AMIs, providing a reliable and quick way to launch Windows Server 2025 on EC2 instances. These images support Nitro-based instances with Unified Extensible Firmware Interface (UEFI) to provide enhanced security. They also come with features such as Amazon EBS gp3 as the default root volume and the AWS NVMe driver pre-installed, giving you faster throughput and better price-performance. In addition, you can seamlessly use these images with pre-qualified services such as AWS Systems Manager, Amazon EC2 Image Builder, and AWS License Manager. Windows Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions. You can find and launch instances directly from the Amazon EC2 console or through API or CLI commands. All instances running Windows Server 2025 AMIs are billed under the EC2 pricing for the Windows operating system (OS). To learn more about the new AMIs, see the AWS Windows AMI reference. To learn more about running Windows Server 2025 on Amazon EC2, visit the Windows Workloads on AWS page.
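One common way to find the latest Windows AMI programmatically is the public SSM parameter namespace AWS maintains for Windows images. The exact 2025 parameter name below is an assumption patterned on earlier Windows Server releases; confirm it in the AWS Windows AMI reference before relying on it.

```python
# Sketch: resolve the latest Windows Server 2025 AMI via AWS's public SSM
# parameters for Windows AMIs. The 2025 parameter name is an assumption
# modeled on prior releases (e.g. Windows_Server-2022-English-Full-Base).

def windows_2025_ssm_parameter(edition="English-Full-Base"):
    """Build the assumed SSM parameter name for a Windows Server 2025 AMI."""
    return f"/aws/service/ami-windows-latest/Windows_Server-2025-{edition}"

name = windows_2025_ssm_parameter()
# Real lookup (requires credentials and a region):
#   ami_id = boto3.client("ssm").get_parameter(Name=name)["Parameter"]["Value"]
```

Resolving the AMI through SSM rather than hard-coding an AMI ID keeps launch templates working as AWS publishes patched images.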

Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your identity provider (IdP), such as Microsoft Entra ID and Okta, to datasets stored in Amazon S3, helping you easily manage data permissions at scale. This integration gives customers the ability to manage S3 permissions for Redshift users without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with AWS IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups automatically have permission to read and write data in S3 using the COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically updates S3 permissions as users are added to and removed from user groups in the IdP. Amazon S3 Access Grants with Amazon Redshift are available for users federated via an IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.
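As a sketch of the grant itself, the snippet below assembles parameters for the s3control CreateAccessGrant API, mapping a directory group to a prefix. The field names reflect our reading of that API and the location ID, group identifier, and prefix are placeholders; verify the shapes against the current S3 Access Grants documentation.

```python
# Sketch: grant an IdP group (federated via IAM Identity Center) access to an
# S3 prefix using S3 Access Grants. Field names follow the s3control
# CreateAccessGrant API as we understand it -- verify in the API reference.

def access_grant_params(account_id, location_id, group_id, prefix,
                        permission="READWRITE"):
    """Build CreateAccessGrant parameters for a directory-group grantee."""
    return {
        "AccountId": account_id,
        "AccessGrantsLocationId": location_id,
        # Narrow the registered location down to one prefix.
        "AccessGrantsLocationConfiguration": {"S3SubPrefix": prefix},
        "Grantee": {
            "GranteeType": "DIRECTORY_GROUP",   # an IdP group, not an IAM role
            "GranteeIdentifier": group_id,
        },
        "Permission": permission,               # READ, WRITE, or READWRITE
    }

params = access_grant_params(
    "123456789012", "default", "a1b2c3d4-analysts-group", "reports/*")
# Real call: boto3.client("s3control").create_access_grant(**params)
```

With such a grant in place, membership changes in the IdP group are what add or remove access; no bucket policy edits are needed.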

AWS CodeBuild now supports additional compute types for reserved capacity

AWS CodeBuild now supports 18 new compute options for your reserved capacity fleets. You can select up to 96 vCPUs and 192 GB of memory to build and test your software applications on Linux x86, Arm, and Windows platforms. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Customers using reserved capacity can now access the new compute types by configuring vCPU, memory size, and disk space attributes on the fleets. With the addition of these new types, you now have a wider range of compute options across different Linux and Windows platforms for your workloads. The new compute types are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt), where reserved capacity fleets are supported. To learn more about compute options in reserved capacity, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
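A sketch of configuring those attributes on a fleet is below. The `computeConfiguration` block and `ATTRIBUTE_BASED_COMPUTE` value reflect our understanding of the CodeBuild CreateFleet API for attribute-based compute; treat the exact names as assumptions to check in the documentation.

```python
# Sketch (assumed field names per the CodeBuild CreateFleet API): a reserved
# capacity fleet sized by vCPU, memory, and disk attributes.

def fleet_params(name, base_capacity=2):
    """Build CreateFleet parameters for an attribute-based Linux fleet."""
    return {
        "name": name,
        "baseCapacity": base_capacity,          # machines kept warm
        "environmentType": "LINUX_CONTAINER",
        "computeType": "ATTRIBUTE_BASED_COMPUTE",
        "computeConfiguration": {
            "vCpu": 96,     # up to 96 vCPUs per the announcement
            "memory": 192,  # up to 192 GB of memory
            "disk": 256,    # disk space in GB (placeholder value)
        },
    }

fleet = fleet_params("linux-xl-fleet")
# Real call: boto3.client("codebuild").create_fleet(**fleet)
```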

Six new synthetic generative voices for Amazon Polly

Today, we are excited to announce the general availability of six highly expressive Amazon Polly generative voices in English, French, Spanish, and German.

Amazon Polly is a managed service that turns text into lifelike speech, allowing you to create applications that talk and to build speech-enabled products depending on your business needs. The generative engine is Amazon Polly’s most advanced text-to-speech (TTS) model. Today, we release six new synthetic female-sounding generative voices: Ayanda (South African English), Léa (French), Lucia (European Spanish), Lupe (American Spanish), Mía (Mexican Spanish), and Vicki (German). This launch increases the number of generative Polly voices from seven to thirteen and expands our footprint from three to nine locales. Leveraging the same generative AI technology that powered the English generative voices, Polly now supports German, Spanish, and French to provide our customers with more options for highly expressive and engaging voices. The Ayanda, Léa, Lucia, Lupe, Mía, and Vicki generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) Regions and complement the other types of voices already available in the same Regions.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
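As a sketch of selecting one of these voices, the snippet below builds a SynthesizeSpeech request targeting the generative engine. Note that Polly voice IDs are written without accents (we assume "Lea" for Léa and "Mia" for Mía; verify the exact IDs in the Polly voice list).

```python
# Sketch: request speech from one of the new generative voices via the Polly
# SynthesizeSpeech API; Engine="generative" selects the generative engine.

def polly_request(text, voice_id="Lea", output_format="mp3"):
    """Build SynthesizeSpeech parameters for a generative voice."""
    return {
        "Engine": "generative",
        "VoiceId": voice_id,   # e.g. Ayanda, Lea, Lucia, Lupe, Mia, Vicki
        "Text": text,
        "OutputFormat": output_format,
    }

req = polly_request("Bonjour et bienvenue !", voice_id="Lea")
# Real call (returns an AudioStream to write to a file):
#   boto3.client("polly").synthesize_speech(**req)
```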

AWS Security Hub launches 7 new security controls

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 437. Security Hub released new controls for Amazon Simple Notification Service (Amazon SNS) topics and AWS Key Management Service (AWS KMS) keys checking for public access. Security Hub now supports additional controls for encryption checks for key AWS services such as AWS AppSync and Amazon Elastic File System (Amazon EFS). For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without any additional action.
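The "turn on the standard" step can be sketched with the Security Hub BatchEnableStandards API. The ARN below is the AWS Foundational Security Best Practices standard; substitute the standard your controls belong to (check the user guide for the exact ARN in your Region).

```python
# Sketch: enable a Security Hub standard so its controls (including newly
# released ones, when auto-enable is on) start evaluating your resources.

def enable_standard_params(region="us-east-1"):
    """Build BatchEnableStandards parameters for the FSBP standard."""
    arn = (f"arn:aws:securityhub:{region}::standards/"
           "aws-foundational-security-best-practices/v/1.0.0")
    return {"StandardsSubscriptionRequests": [{"StandardsArn": arn}]}

params = enable_standard_params("eu-west-1")
# Real call:
#   boto3.client("securityhub",
#                region_name="eu-west-1").batch_enable_standards(**params)
```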

To get started, consult the following list of resources:

Learn more about Security Hub capabilities and features in the AWS Security Hub user guide

Subscribe to the Security Hub SNS topic to receive notifications about new Security Hub features and controls

Try Security Hub at no cost for 30 days on the AWS Free Tier.

Amazon EC2 R8g instances now available in AWS Europe (Ireland)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Europe (Ireland) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5 TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

New Kinesis Client Library 3.0 reduces stream processing compute costs by up to 33%

You can now reduce the compute costs of processing streaming data by up to 33% with Kinesis Client Library (KCL) 3.0 compared to previous KCL versions. KCL 3.0 introduces an enhanced load-balancing algorithm that continuously monitors the resource utilization of stream processing workers and automatically redistributes load from over-utilized workers to underutilized ones. This ensures even CPU utilization across workers and removes the need to over-provision stream processing compute, which reduces cost. Additionally, KCL 3.0 is built with the AWS SDK for Java 2.x for improved performance and security, fully removing the dependency on the AWS SDK for Java 1.x.

KCL is an open-source library that simplifies the development of stream processing applications with Amazon Kinesis Data Streams. It manages complex tasks associated with distributed computing such as load balancing, fault tolerance, and service coordination, allowing you to focus solely on your core business logic. You can upgrade a stream processing application running on KCL 2.x by simply replacing the current library with KCL 3.0, without any changes to your application code. KCL 3.0 supports stream processing applications running on Amazon EC2 instances or on containers with Amazon ECS, Amazon EKS, or AWS Fargate. KCL 3.0 is available with Amazon Kinesis Data Streams in all AWS Regions. To learn more, see the Amazon Kinesis Data Streams developer guide, KCL 3.0 release notes, and launch blog.
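For a Maven-based Java application (KCL is a Java library), the upgrade described above amounts to a coordinate change, assuming your build pulls KCL from Maven Central under its usual `software.amazon.kinesis` group:

```xml
<!-- Replace your existing KCL 2.x dependency with the 3.0 coordinate; -->
<!-- per the announcement, no application-code changes are required. -->
<dependency>
    <groupId>software.amazon.kinesis</groupId>
    <artifactId>amazon-kinesis-client</artifactId>
    <version>3.0.0</version>
</dependency>
```

Gradle users would make the equivalent one-line change to the dependency string.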

Amazon MSK now supports vector embedding generation using Amazon Bedrock

Amazon MSK (Managed Streaming for Apache Kafka) now supports new Amazon Managed Service for Apache Flink blueprints to generate vector embeddings using Amazon Bedrock, making it easier to build real-time AI applications powered by up-to-date, contextual data. This blueprint simplifies the process of incorporating the latest data from your Amazon MSK streaming pipelines into your generative AI models, eliminating the need to write custom code to integrate real-time data streams, vector databases, and large language models.

With just a few clicks, customers can configure the blueprint to continuously generate vector embeddings for their Amazon MSK data streams using Bedrock’s embedding models, then index those embeddings in Amazon OpenSearch Service. This allows customers to combine the context from real-time data with Bedrock’s powerful large language models to generate accurate, up-to-date AI responses without writing custom code. Customers can also choose to improve the efficiency of data retrieval using built-in support for data-chunking techniques from LangChain, an open-source library, supporting high-quality inputs for model ingestion. The blueprint manages the data integration and processing between MSK, the chosen embedding model, and the OpenSearch vector store, allowing customers to focus on building their AI applications rather than managing the underlying integration.

The real-time vector embedding blueprint is generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Paris), Europe (London), Europe (Ireland), and South America (Sao Paulo) AWS Regions. Visit the Amazon MSK documentation for the list of additional Regions that will be supported over the next few weeks. To learn more about how to use the blueprint to generate real-time vector embeddings from your Amazon MSK data, visit the AWS blog.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Open Source Blog

AWS Architecture Blog

AWS Big Data Blog

AWS Compute Blog

Containers

AWS Database Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

Open Source Project

AWS CLI