10/1/2024, 12:00:00 AM ~ 10/2/2024, 12:00:00 AM (UTC)

Recent Announcements

New VMware Strategic Partner Incentive (SPI) for Managed Services in AWS Partner Central

Today, Amazon Web Services, Inc. (AWS) announces a new VMware SPI for Managed Services as part of the Migration Acceleration Program (MAP) in AWS Partner Central. Eligible AWS Partners who also provide managed services post-migration can now leverage the VMware SPI for Managed Services to accelerate VMware customer migration opportunities.

This new VMware SPI for Managed Services is available through the enhanced MAP template in AWS Partner Central, which provides better speed to market with fewer AWS approval stages. With this enhancement, the AWS Partner Funding Portal (APFP) automatically calculates the eligible VMware SPI for Managed Services, improving overall partner productivity by eliminating manual steps. The VMware SPI for Managed Services is now available to all Partners in the Services path at the Validated or higher stage, including all AWS Migration and Modernization Competency Partners. To learn more, review the 2024 APFP user guide.

Amazon Redshift launches RA3.large instances

Amazon Redshift launches RA3.large, a new smaller size in the RA3 node type with 2 vCPU and 16 GiB of memory. RA3.large gives you more flexibility in compute options to choose from based on your workload requirements.

Amazon Redshift RA3.large offers all the innovation of Redshift Managed Storage (RMS), including scaling and paying for compute and storage independently, data sharing, write operations support for concurrency scaling, Zero-ETL, and Multi-AZ. Alongside the already available sizes in the RA3 node type, RA3.16xlarge, RA3.4xlarge, and RA3.xlplus, RA3.large gives you even more compute sizing options to address diverse workload and price-performance requirements. To get started with RA3.large, you can create a cluster with the AWS Management Console or the create cluster API. To upgrade a cluster from your Redshift DC2 environment to an RA3 cluster, you can take a snapshot of your existing cluster and restore it to an RA3 cluster, or resize from your existing cluster to a new RA3 cluster. To learn more about the RA3 node type, see the cluster management guide and the ‘Upgrading to RA3 node type’ documentation. You can find more information on pricing by visiting the Amazon Redshift pricing page. RA3.large is generally available in all commercial Regions where the RA3 node type is available. For more details on regional availability, see the ‘RA3 node type availability’ documentation.

AWS announces Reserved Nodes flexibility for Amazon ElastiCache

Today we’re announcing enhancements to Amazon ElastiCache Reserved Nodes that make them flexible and easier to use, helping you get the most out of your reserved node discount. Reserved nodes provide a significant discount compared to on-demand node prices, enabling you to optimize costs based on your expected usage.

Previously, you needed to purchase a reservation for a specific node type (e.g., cache.r7g.xlarge) and were only eligible for a discount on that type, with no flexibility. With this feature, ElastiCache reserved nodes offer size flexibility within an instance family (or node family) and AWS Region. This means your existing discounted reserved node rate is applied automatically to usage of all sizes in the same node family. For example, if you purchase an r7g.xlarge reserved node and need to scale to a larger node such as r7g.2xlarge, your reserved node discounted rate is automatically applied to 50% of the usage of the r7g.2xlarge node in the same AWS Region. Size flexibility reduces the time you need to spend managing your reserved nodes, and you get the most out of your discount even if your capacity needs change. Amazon ElastiCache reserved node size flexibility is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To learn more, visit Amazon ElastiCache, the ElastiCache user guides, and our blog post.
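The 50% figure above follows from the size ratio within a node family. A minimal sketch of that arithmetic, assuming AWS's standard instance-size normalization factors (large = 4, xlarge = 8, 2xlarge = 16, and so on); the function name and structure are illustrative, not an AWS API:

```python
# Size-flexible reserved-node discount arithmetic: a reservation covers a
# fraction of a differently sized node in the same family, proportional to
# the ratio of their normalization factors (assumed values below).

SIZE_FACTORS = {
    "large": 4, "xlarge": 8, "2xlarge": 16,
    "4xlarge": 32, "8xlarge": 64, "16xlarge": 128,
}

def discounted_fraction(reserved_size: str, running_size: str) -> float:
    """Fraction of a running node's usage covered by one reserved node
    of the same family (capped at 100%)."""
    ratio = SIZE_FACTORS[reserved_size] / SIZE_FACTORS[running_size]
    return min(ratio, 1.0)

# A cache.r7g.xlarge reservation covers half of a cache.r7g.2xlarge node:
print(discounted_fraction("xlarge", "2xlarge"))  # 0.5
# Scaling down instead: the same reservation fully covers a cache.r7g.large:
print(discounted_fraction("xlarge", "large"))    # 1.0
```

The same ratio logic explains why scaling down never wastes the reservation: coverage is simply capped at 100%.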

Amazon Data Firehose delivers data streams into Apache Iceberg format tables in Amazon S3

Amazon Data Firehose (Firehose) can now deliver data streams into Apache Iceberg tables in Amazon S3.

Firehose enables customers to acquire, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this new feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs, and many others listed here. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3 and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed. The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record. To route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify whether an incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, automating processing for data correction and right-to-be-forgotten scenarios. To get started, visit the Amazon Data Firehose documentation, pricing, and console.
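The content-based routing described above can be pictured as predicates over the decoded record selecting a destination table. This is a simplified stand-in, not Firehose's actual JSON-expression rule format, and the database and table names are made up for illustration:

```python
import json

# Simplified model of content-based routing: each rule pairs a predicate over
# the decoded JSON record with a destination (database, table). Rule syntax
# and table names here are illustrative, not Firehose's configuration format.

RULES = [
    (lambda r: r.get("event_type") == "click", ("analytics_db", "clicks")),
    (lambda r: r.get("event_type") == "error", ("ops_db", "errors")),
]
DEFAULT_TABLE = ("analytics_db", "events")

def route(record_bytes: bytes) -> tuple:
    """Return the (database, table) an incoming record should land in."""
    record = json.loads(record_bytes)
    for predicate, table in RULES:
        if predicate(record):
            return table
    return DEFAULT_TABLE

print(route(b'{"event_type": "click", "user": "u1"}'))  # ('analytics_db', 'clicks')
print(route(b'{"event_type": "page_view"}'))            # ('analytics_db', 'events')
```

In the real service, the predicate and destination are declared in the stream's configuration rather than in code, and Firehose evaluates them per record.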

Amazon Connect launches the ability to initiate outbound SMS contacts

Amazon Connect now supports the ability to initiate outbound SMS contacts, enabling you to help increase customer satisfaction by engaging your customers on their preferred communication channel. You can now deliver proactive SMS experiences for scenarios such as post-contact surveys, appointment reminders, and service updates, allowing customers to respond at their convenience. Additionally, you can offer customers the option to switch to SMS while waiting in a call queue, eliminating their hold time.

To get started, add the new Send message block to a contact flow or use the new StartOutboundChatContact API to initiate outbound SMS contacts. This feature is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London). To learn more and get started, please refer to the documentation for the Send message flow block and StartOutboundChatContact API.

Amazon MSK APIs now support AWS PrivateLink

Amazon Managed Streaming for Apache Kafka (Amazon MSK) APIs now come with AWS PrivateLink support, allowing you to invoke Amazon MSK APIs from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet.

By default, all communication between your Apache Kafka clients and your Amazon MSK provisioned clusters is private, and your data never traverses the internet. With this launch, clients can also invoke MSK APIs via a private endpoint. This allows client applications with strict security requirements to perform MSK-specific actions, such as fetching bootstrap connection strings or describing cluster details, without needing to communicate over a public connection. AWS PrivateLink support for Amazon MSK is available in all AWS Regions where Amazon MSK is available. To get started, follow the directions provided in the AWS PrivateLink documentation. To learn more about Amazon MSK, visit the Amazon MSK documentation.

AWS Incident Detection and Response now available in Japanese

Starting today, AWS Incident Detection and Response supports incident engagement in Japanese. AWS Incident Detection and Response offers AWS Enterprise Support customers proactive engagement and incident management for critical workloads. With AWS Incident Detection and Response, AWS Incident Management Engineers (IMEs) are available 24/7 to detect incidents and engage with you within five minutes of an alarm from your workloads, providing guidance for mitigation and recovery.

This feature allows AWS Enterprise Support customers to interact with Japanese-speaking IMEs, who provide proactive engagement and incident management for critical incidents. To use this service in Japanese, customers must select Japanese as their preferred language during workload onboarding. For more details, including information on supported Regions and additional specifics about the AWS Incident Detection and Response service, please visit the product page.

AWS Chatbot adds support to centrally manage access to AWS accounts from Slack and Microsoft Teams with AWS Organizations

AWS announces general availability of AWS Organizations support in AWS Chatbot. AWS customers can now centrally govern access to their accounts from Slack and Microsoft Teams with AWS Organizations.

This launch introduces a chatbot management policy type in AWS Organizations to control access to your organization’s accounts from chat channels. Using Service Control Policies (SCPs), customers can also globally enforce permission limits on CLI commands originating from chat channels. With this launch, customers can use chatbot policies and multi-account management services in AWS Organizations to determine which permissions models, chat applications, and chat workspaces can be used to access their accounts. For example, you can restrict access to production accounts from chat channels in designated workspaces/teams. Customers can also use SCPs to specify guardrails on the CLI command tasks executed from chat channels. For example, you can deny all rds delete-db-cluster CLI actions originating from chat channels. AWS Organizations support in AWS Chatbot is available at no additional cost in all AWS Regions where AWS Chatbot is offered. Visit the Securing your AWS organization in AWS Chatbot documentation and blog to learn more.
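As a rough illustration of the SCP guardrail mentioned above, here is a minimal deny policy for the rds delete-db-cluster example. The role naming pattern "ChatbotChannelRole-*" is an assumption for illustration: it presumes your chat-channel configurations use dedicated, consistently named IAM roles, which is not something this announcement specifies:

```python
import json

# Sketch of an SCP that blocks DB cluster deletion when the request comes from
# an assumed chat-channel role. The "ChatbotChannelRole-*" pattern is a
# placeholder; substitute however your channel roles are actually named.

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDbClusterDeleteFromChat",
            "Effect": "Deny",
            "Action": "rds:DeleteDBCluster",
            "Resource": "*",
            "Condition": {
                "ArnLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/ChatbotChannelRole-*"
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached to an organizational unit, a policy shaped like this would deny the action even if the channel role's own IAM policy allows it, since SCPs set the outer permission boundary.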

Amazon EMR Serverless introduces Job Run Concurrency and Queuing controls

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce job run admission control on Amazon EMR Serverless with support for job run concurrency and queuing controls.

Job run concurrency and queuing lets you configure the maximum number of concurrent job runs for an application; all other submitted job runs are automatically queued. This prevents job run failures caused when API limits are exceeded due to a spike in job run submissions, or when resources are exhausted because an account’s or application’s maximum concurrent vCPUs limit or an underlying subnet’s IP address limit is reached. Job run queuing also simplifies job run management by eliminating the need to build complex queuing systems to retry jobs that failed due to limit errors (e.g., maximum concurrent vCPUs, subnet IP address limits). With this feature, jobs are automatically queued and processed as concurrency slots become available, ensuring efficient resource utilization and preventing job failures. Amazon EMR Serverless job run concurrency and queuing is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) Regions and excluding the China Regions. To learn more, visit Job concurrency and queuing in the EMR Serverless documentation.
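The admission-control behavior described above, run up to a cap and queue the rest, can be sketched as a toy model. Class and method names here are illustrative, not the EMR Serverless API:

```python
from collections import deque

# Toy model of job-run admission control: at most `max_concurrent` job runs
# execute at once; submissions beyond that are queued instead of failing.

class AdmissionController:
    def __init__(self, max_concurrent: int):
        self.max_concurrent = max_concurrent
        self.running = set()    # job IDs currently executing
        self.queued = deque()   # job IDs waiting for a slot, FIFO

    def submit(self, job_id: str) -> str:
        if len(self.running) < self.max_concurrent:
            self.running.add(job_id)
            return "RUNNING"
        self.queued.append(job_id)
        return "QUEUED"

    def complete(self, job_id: str) -> None:
        """Finish a running job and promote the oldest queued job, if any."""
        self.running.remove(job_id)
        if self.queued:
            self.running.add(self.queued.popleft())

ctrl = AdmissionController(max_concurrent=2)
print(ctrl.submit("job-1"))  # RUNNING
print(ctrl.submit("job-2"))  # RUNNING
print(ctrl.submit("job-3"))  # QUEUED (would previously have hit limit errors)
ctrl.complete("job-1")
print("job-3" in ctrl.running)  # True
```

The key difference from the pre-launch behavior is the `QUEUED` branch: instead of the third submission failing on a concurrency or vCPU limit, it waits and is promoted when a slot frees up.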

Amazon S3 adds Service Quotas support for S3 general purpose buckets

You can now manage your Amazon S3 general purpose bucket quotas in Service Quotas. Using Service Quotas, you can view the total number of buckets in an AWS account, compare that number to your bucket quota, and request a service quota increase.

You can get started using the Amazon S3 page on the Service Quotas console, AWS SDK, or AWS CLI. Service Quotas support for S3 is available in the US East (N. Virginia) and China (Beijing) AWS Regions. To learn more about using Service Quotas with S3 buckets, visit the S3 User Guide.

NICE DCV renames to Amazon DCV and releases version 2024.0 with support for Ubuntu 24.04

Amazon announces DCV version 2024.0. In this latest release, NICE DCV has been renamed to Amazon DCV. The new DCV version introduces several enhancements, including support for Ubuntu 24.04 and enabling the QUIC UDP protocol by default. Amazon DCV is a high-performance remote display protocol designed to help customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.

Amazon DCV version 2024.0 introduces the following updates, features, and improvements:

Renames to Amazon DCV. NICE DCV is now renamed as Amazon DCV. Additionally, Amazon has consolidated the WorkSpaces Streaming Protocol (WSP), used in Amazon WorkSpaces, with Amazon DCV. The renaming does not affect customer workloads, and there is no change to folder paths and internal tooling names.

Supports Ubuntu 24.04, the latest LTS version of Ubuntu with the latest security patches and updates, providing improved stability and reliability. Additionally, the DCV client on Ubuntu 24.04 now natively supports Wayland, providing better performance through more efficient graphical rendering.

Enables the QUIC UDP protocol by default, allowing end users to receive an optimized streaming experience.

Adds the ability to blank the Linux host screen when a remote user is connected to the Linux server in a console session, preventing users physically present near the server from seeing the screen and interacting with the remote session using the input devices connected to the host.

For more information, please see the Amazon DCV 2024.0 release notes or visit the Amazon DCV webpage to get started with DCV.

Amazon Bedrock Knowledge Bases now provides option to stop ingestion jobs

Today, Amazon Bedrock Knowledge Bases is announcing the general availability of the stop ingestion API. This new API offers you greater control over data ingestion workflows by allowing you to stop an ongoing ingestion job that you no longer want to continue.

Previously, you had to wait for an ingestion job to run to completion, even when you no longer wanted to ingest from the data source or needed to make other adjustments. With the introduction of the new “StopIngestionJob” API, you can now stop an in-progress ingestion job with a single API call. For example, you can use this feature to quickly stop an ingestion job you accidentally initiated, or to change the documents in your data source. This enhanced flexibility enables you to rapidly respond to changing requirements and optimize your costs. This new capability is available across all AWS Regions where Amazon Bedrock Knowledge Bases is available. To learn more about stopping ingestion jobs and the other capabilities of Amazon Bedrock Knowledge Bases, please refer to the documentation.
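Conceptually, the stop call only applies to jobs that are still in flight. A toy state model of that behavior, assuming jobs move through starting/in-progress/terminal states; this stand-in illustrates the transition a stop request triggers, not the Bedrock API surface:

```python
# Toy state model of stopping an ingestion job: stoppable states move to
# STOPPING (the service then drains work toward STOPPED); terminal states
# are left unchanged. State names are assumptions for illustration.

STOPPABLE = {"STARTING", "IN_PROGRESS"}

def stop_ingestion_job(status: str) -> str:
    """Return the job's status after a stop request."""
    if status in STOPPABLE:
        return "STOPPING"
    return status  # COMPLETE, FAILED, STOPPED, etc. are unaffected

print(stop_ingestion_job("IN_PROGRESS"))  # STOPPING
print(stop_ingestion_job("COMPLETE"))     # COMPLETE
```

In practice you would issue the real “StopIngestionJob” call with the knowledge base, data source, and job identifiers, then poll the job until it reaches a terminal state.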

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Cloud Operations Blog

AWS Big Data Blog

AWS Contact Center

Containers

AWS Database Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

Networking & Content Delivery

AWS Security Blog

AWS Storage Blog

Open Source Project

AWS CLI

Amplify UI

Bottlerocket OS

AWS Load Balancer Controller

Karpenter