11/12/2024, 12:00:00 AM ~ 11/13/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon DynamoDB announces user experience enhancements to organize your tables in the AWS GovCloud (US) Regions

Amazon DynamoDB now enables customers to easily find frequently used tables in the DynamoDB console in the AWS GovCloud (US) Regions. Customers can favorite their tables on the console’s tables page for quicker table access.

Customers can click the favorites icon to view their favorited tables on the console’s tables page. With this update, customers have a faster and more efficient way to find and work with the tables they often monitor, manage, and explore. Favorite tables are available at no additional cost. Get started by creating a DynamoDB table from the AWS Management Console.
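The favorites workflow is console-only, but table creation itself is scriptable. The following is a minimal sketch of the request parameters that boto3's `create_table` accepts, using a hypothetical `Orders` table with on-demand capacity; the API call needs AWS credentials, so it is left commented out:

```python
# Sketch: parameters for creating a DynamoDB table with boto3.
# The table name and key attributes are illustrative, not from the announcement.
table_spec = {
    "TableName": "Orders",                      # hypothetical table name
    "KeySchema": [
        {"AttributeName": "pk", "KeyType": "HASH"},   # partition key
        {"AttributeName": "sk", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "pk", "AttributeType": "S"},
        {"AttributeName": "sk", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",           # on-demand capacity mode
}

# With credentials configured, the actual call would be:
# import boto3
# boto3.client("dynamodb").create_table(**table_spec)
```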

Amazon Managed Service for Apache Flink now supports Amazon DynamoDB Streams as a source

Today, AWS announced support for a new Apache Flink connector for Amazon DynamoDB. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Apache Flink. You can now process DynamoDB Streams events with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. DynamoDB Streams provides a time-ordered sequence of item-level changes (insert, update, and delete) in a DynamoDB table. With Amazon Managed Service for Apache Flink, you can transform and analyze DynamoDB Streams data in real time using Apache Flink and integrate applications with other AWS services such as Amazon S3, Amazon OpenSearch Service, Amazon Managed Streaming for Apache Kafka, and more. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink, there are no servers or clusters to manage, and there is no compute or storage infrastructure to set up. The Apache Flink repository for AWS connectors is available on GitHub. For detailed documentation and setup instructions, visit the documentation.
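The change records the connector reads carry an `eventName` of `INSERT`, `MODIFY`, or `REMOVE`, matching the item-level changes described above. A Flink-free sketch of classifying such records in plain Python, where the record shape follows the DynamoDB Streams API but the sample records are hand-written rather than read from a real stream:

```python
# Sketch: classify DynamoDB Streams change records by eventName.
# Sample records are hand-written; a real pipeline would receive them
# from the stream (here, via the new Apache Flink connector).
from collections import Counter

def classify(records):
    """Count item-level changes by type (INSERT / MODIFY / REMOVE)."""
    return Counter(r["eventName"] for r in records)

sample = [
    {"eventName": "INSERT", "dynamodb": {"Keys": {"pk": {"S": "a"}}}},
    {"eventName": "MODIFY", "dynamodb": {"Keys": {"pk": {"S": "a"}}}},
    {"eventName": "REMOVE", "dynamodb": {"Keys": {"pk": {"S": "a"}}}},
    {"eventName": "INSERT", "dynamodb": {"Keys": {"pk": {"S": "b"}}}},
]

print(classify(sample))  # Counter({'INSERT': 2, 'MODIFY': 1, 'REMOVE': 1})
```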

Amazon Neptune Serverless is now available in 6 additional AWS Regions

Amazon Neptune Serverless is now available in the Europe (Paris), South America (Sao Paulo), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), and Asia Pacific (Seoul) AWS Regions.

Amazon Neptune is a fast, reliable, and fully managed graph database service for building and running applications with highly connected datasets, such as knowledge graphs, fraud graphs, identity graphs, and security graphs. If you have unpredictable and variable workloads, Neptune Serverless automatically determines and provisions the compute and memory resources to run the graph database. Database capacity scales up and down based on the application’s changing requirements to maintain consistent performance, saving up to 90% in database costs compared to provisioning at peak capacity. With today’s launch, Neptune Serverless is available in 19 AWS Regions. For pricing and Region availability, visit the Neptune pricing page. You can create a Neptune Serverless cluster from the AWS Management Console, AWS Command Line Interface (CLI), or SDK. To learn more about Neptune Serverless, visit the product page or the documentation.
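For the SDK path, a hedged sketch of the request parameters a serverless cluster creation might use with boto3. The field names mirror the Neptune API as I understand it (serverless capacity is expressed in Neptune Capacity Units via a min/max scaling configuration); verify against the current documentation before use, and note the identifier and capacity values are illustrative:

```python
# Sketch: request parameters for a Neptune Serverless cluster via boto3.
# Field names are my best understanding of the Neptune API; values are
# illustrative. The call itself needs AWS credentials, so it is commented out.
cluster_spec = {
    "DBClusterIdentifier": "my-graph",   # hypothetical cluster name
    "Engine": "neptune",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 1.0,   # minimum Neptune Capacity Units (NCUs)
        "MaxCapacity": 8.0,   # capacity scales between min and max with load
    },
}

# With credentials configured:
# import boto3
# boto3.client("neptune").create_db_cluster(**cluster_spec)
```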

Amazon EC2 Capacity Blocks expands to new regions

Today, Amazon Web Services announces that Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML is available for P5 instances in two new Regions: US West (Oregon) and Asia Pacific (Tokyo). You can use EC2 Capacity Blocks to reserve highly sought-after GPU instances in Amazon EC2 UltraClusters for a future date, for the amount of time that you need to run your machine learning (ML) workloads.

EC2 Capacity Blocks enable you to reserve GPU capacity up to eight weeks in advance, for durations of up to 28 days, in cluster sizes of one to 64 instances (512 GPUs), giving you the flexibility to run a broad range of ML workloads. They are ideal for short-duration pre-training and fine-tuning workloads, rapid prototyping, and handling surges in inference demand. EC2 Capacity Blocks deliver low-latency, high-throughput connectivity through colocation in Amazon EC2 UltraClusters. With this expansion, EC2 Capacity Blocks for ML are available for the following instance types and AWS Regions: P5 instances in US East (N. Virginia), US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo); P5e instances in US East (Ohio); P4d instances in US East (Ohio) and US West (Oregon); and Trn1 instances in Asia Pacific (Melbourne). To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.
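The stated limits (reserve up to eight weeks in advance, durations up to 28 days, one to 64 instances) can be sanity-checked locally before making a reservation request. A small sketch, using only the constraints quoted above; the helper function and sample dates are illustrative, not part of any AWS API:

```python
# Sketch: local sanity check of a Capacity Block request against the limits
# stated in the announcement. Not an AWS API; purely illustrative.
from datetime import date, timedelta

def valid_capacity_block(start: date, days: int, instances: int,
                         today: date) -> bool:
    """True if the request fits: <= 8 weeks lead time, 1-28 days, 1-64 instances."""
    lead_ok = timedelta(0) <= (start - today) <= timedelta(weeks=8)
    return lead_ok and 1 <= days <= 28 and 1 <= instances <= 64

today = date(2024, 11, 12)
assert valid_capacity_block(date(2024, 12, 1), 14, 8, today)      # fits
assert not valid_capacity_block(date(2025, 3, 1), 14, 8, today)   # too far out
assert not valid_capacity_block(date(2024, 12, 1), 30, 8, today)  # too long
```

The 512-GPU figure follows from the maximum cluster size, since each P5 instance carries 8 GPUs (64 × 8 = 512).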

Amazon SageMaker Model Registry now supports defining machine learning model lifecycle stages

Today, we are excited to announce that Amazon SageMaker Model Registry now supports custom machine learning (ML) model lifecycle stages. This capability further improves model governance by enabling data scientists and ML engineers to define and control the progression of their models across various stages, from development to production.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can define custom stages such as development, testing, and production for ML models in the model registry. This makes it easy to track and manage models as they transition across stages in the model lifecycle, from training to inference. They can also track stage approval status, such as Pending Approval, Approved, and Rejected, to check when a model is ready to move to the next stage. These custom stages and approval statuses help data scientists and ML engineers define and enforce model approval workflows, ensuring that models meet specific criteria before advancing to the next stage. By implementing these custom stages and approval processes, customers can standardize model governance practices across their organization, maintain better oversight of model progression, and ensure that only approved models reach production environments. This capability is available in all AWS Regions where Amazon SageMaker Model Registry is available, except the AWS GovCloud (US) Regions. To learn more, see Staging Construct for your Model Lifecycle.
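The stage-plus-approval idea can be expressed as a small gating rule: a model advances only when its current stage is approved and it is not already at the final stage. A plain-Python sketch of that rule; the stage names are the examples from the announcement, and the transition logic here is illustrative rather than SageMaker's actual implementation:

```python
# Sketch: stage/approval gating, as described for SageMaker Model Registry.
# Stage names are the announcement's examples; the rule itself is illustrative.
STAGES = ["Development", "Testing", "Production"]          # example custom stages
STATUSES = {"PendingApproval", "Approved", "Rejected"}     # approval statuses

def can_advance(current_stage: str, approval_status: str) -> bool:
    """A model moves to the next stage only once its current stage is approved."""
    if approval_status not in STATUSES:
        raise ValueError(f"unknown status: {approval_status}")
    not_last = STAGES.index(current_stage) < len(STAGES) - 1
    return not_last and approval_status == "Approved"

assert can_advance("Development", "Approved")
assert not can_advance("Development", "PendingApproval")
assert not can_advance("Production", "Approved")   # already at the final stage
```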

AWS CodeBuild now supports Windows Docker builds in reserved capacity fleets

AWS CodeBuild now supports building Windows Docker images in reserved capacity fleets. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Additionally, you can bring your own Amazon Machine Images (AMIs) to reserved capacity fleets for the Linux and Windows platforms. This lets you customize your build environment, including building and testing with different kernel modules, for more flexibility. The feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt), where reserved capacity fleets are supported. You can follow the Windows Docker image sample to get started. To configure your own AMIs in reserved capacity fleets, visit the reserved capacity documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
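For orientation, a minimal buildspec sketch of what a Windows Docker image build in CodeBuild might look like; the image name and tag are hypothetical, and the official Windows Docker image sample should be preferred as the authoritative starting point:

```yaml
# Minimal buildspec sketch for a Windows Docker image build.
# Image name/tag are illustrative; see the official sample for a full setup.
version: 0.2
phases:
  build:
    commands:
      - docker build -t my-windows-app:latest .
  post_build:
    commands:
      - docker images my-windows-app
```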

Amazon Managed Service for Apache Flink is now available in Asia Pacific (Kuala Lumpur) Region

Starting today, customers can use Amazon Managed Service for Apache Flink in the Asia Pacific (Kuala Lumpur) Region to build real-time stream processing applications.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors. For a list of the AWS Regions where Amazon Managed Service for Apache Flink is available, see the AWS Region Table. You can learn more about Amazon Managed Service for Apache Flink in the documentation.

Announcing financing program for AWS Marketplace purchases for select US customers

Today, AWS announces the availability of a new financing program supported by PNC Vendor Finance, enabling select customers in the United States (US) to finance AWS Marketplace software purchases directly from the AWS Billing and Cost Management console. For the first time, select US customers can apply for, utilize, and manage financing within the console for AWS Marketplace software purchases.

AWS Marketplace helps customers find, try, buy, and launch third-party software, while consolidating billing and management with AWS. With thousands of software products available in AWS Marketplace, this financing program enables you to buy the software you need to drive innovation. With financing amounts ranging from $10,000 to $100,000,000, subject to credit approval, you have more options to pay for your AWS Marketplace purchases. If approved, you can apply financing to AWS Marketplace software purchases with contracts of at least 12 months, across multiple purchases from multiple AWS Marketplace sellers. This financing program gives you the flexibility to better manage your cash flow by spreading payments over time, while paying financing costs only on what you use.

This new financing program supported by PNC Vendor Finance is available in the AWS Billing and Cost Management console for select AWS Marketplace customers in the US, excluding NV, NC, ND, TN, & VT.

To learn more about financing options for AWS Marketplace purchases and details about the financing program supported by PNC Vendor Finance, visit the AWS Marketplace financing page.

Amazon EBS now supports detailed performance statistics on EBS volume health

Today, Amazon announced the availability of detailed performance statistics for Amazon Elastic Block Store (EBS) volumes. This new capability provides you with real-time visibility into the performance of your EBS volumes, making it easier to monitor the health of your storage resources and take action sooner.

With detailed performance statistics, you can access 11 metrics at up to per-second granularity to monitor input/output (I/O) statistics of your EBS volumes, including driven I/O and I/O latency histograms. The granular visibility provided by these metrics helps you quickly identify and proactively troubleshoot application performance bottlenecks that may be caused by factors such as reaching an EBS volume’s provisioned IOPS or throughput limits, enabling you to enhance application performance and resiliency. Detailed performance statistics for EBS volumes are available by default for all EBS volumes attached to a Nitro-based EC2 instance, in all AWS Commercial, China, and AWS GovCloud (US) Regions, at no additional charge.

To get started with EBS detailed performance statistics, see the documentation to learn more about the available metrics and how to access them using NVMe-CLI tools.

Amazon EventBridge announces up to 94% improvement in end-to-end latency for Event Buses

Amazon EventBridge Event Buses announces up to a 94% improvement in end-to-end latency for Event Buses since January 2023, enabling you to handle highly latency-sensitive applications, including fraud detection and prevention, industrial automation, and gaming applications. End-to-end latency is measured as the time from event ingestion to the first event invocation attempt. This lower latency enables you to build highly responsive and efficient event-driven architectures for your time-sensitive applications. You can now detect and respond to critical events more quickly, enabling rapid innovation, faster decision-making, and improved operational efficiency.

For latency-sensitive, mission-critical applications, even small delays can have a big impact. To address this, Amazon EventBridge Event Bus has significantly reduced its P99 end-to-end latency, from 2235.23 ms measured in January 2023 to just 129.33 ms measured in August 2024. This improvement allows EventBridge to deliver events in real time to your mission-critical applications. The lower latency is applied by default across all AWS Regions where Amazon EventBridge is available, including the AWS GovCloud (US) Regions, at no additional cost. Customers can monitor these improvements through the IngestionToInvocationStartLatency metric or the end-to-end IngestionToInvocationSuccessLatency metric, available in the EventBridge console dashboard or via Amazon CloudWatch. The improvement applies globally, ensuring consistent low-latency event processing regardless of your geographic location. For more information on Amazon EventBridge Event Bus, visit the documentation. To get started with Amazon EventBridge, visit the AWS Console and follow the instructions in the user guide.
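The headline figure checks out against the numbers quoted in the text:

```python
# Verify the ~94% claim from the latency figures given above.
before_ms = 2235.23   # P99 end-to-end latency, January 2023
after_ms = 129.33     # P99 end-to-end latency, August 2024

improvement = (before_ms - after_ms) / before_ms
assert abs(improvement - 0.942) < 0.001   # roughly a 94% reduction
print(f"{improvement:.1%}")  # 94.2%
```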

Amazon Q Developer Pro tier adds enhanced administrator capabilities to view user activity

The Amazon Q Developer Pro tier now offers administrators greater visibility into the activity of subscribed users. Amazon Q Developer Pro tier administrators can now view users' last activity information and enable daily user activity reports.

Organization administrators can now view the last activity information for each user’s subscription and the applications within that subscription, enabling better monitoring of usage. Inactive subscriptions can be easily identified through filtering and sorting across all associated applications. Member account administrators can view the last active date specific to the users, applications, and accounts they manage. The last active date is only shown for activity on or after October 30, 2024. Additionally, member account administrators can enable detailed per-user activity reports in the Amazon Q Developer settings by specifying an Amazon S3 bucket where the reports should be published. When enabled, a daily report is delivered to Amazon S3 with detailed user activity metrics, such as the number of messages sent and AI lines of code generated. To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

AWS Fault Injection Service now generates experiment reports

AWS Fault Injection Service (AWS FIS) now generates reports for experiments, reducing the time and effort needed to produce evidence of resilience testing. The report summarizes experiment actions and captures the application's response from a customer-provided Amazon CloudWatch dashboard.

With AWS FIS, you can run fault injection experiments that create realistic failure conditions under which to practice your disaster recovery and failover procedures. To provide evidence of this testing and your application’s recovery response, you can configure experiments to generate a report that you can download from the AWS FIS console and that is automatically delivered to an Amazon S3 bucket of your choice. After the experiment completes, you can review the report to evaluate the impact of the experiment on your key application and resource metrics. Additionally, you can share the reports with stakeholders, including your compliance teams and auditors, as evidence of required testing. Experiment reports are generally available in all commercial AWS Regions where FIS is available. To get started, log in to the AWS FIS console, or use the FIS API, SDK, or AWS CLI. For detailed pricing information, visit the FIS pricing page. To learn more, view the documentation.

AWS Blogs

AWS Japan Blog (Japanese)

AWS Cloud Operations Blog

AWS Big Data Blog

AWS Database Blog

Desktop and Application Streaming

AWS HPC Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

AWS Messaging & Targeting Blog

AWS Security Blog

AWS Storage Blog

Open Source Projects

AWS CLI

Amplify for JavaScript

Firecracker

Karpenter