1/22/2025, 12:00:00 AM ~ 1/23/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon CloudWatch Observability add-on launches one step onboarding for EKS workloads

You can now enable the Amazon CloudWatch Observability add-on on your EKS cluster with one click when provisioning. With one-click enablement, the CloudWatch Observability add-on turns on CloudWatch Container Insights and Application Signals together, enabling you to understand the health and performance of your applications out of the box. The CloudWatch Observability add-on integrates with EKS Pod Identity, so you can create a recommended IAM role for the add-on once and reuse it across your clusters as you create them, saving you time and effort.

Previously, you needed to create your clusters first, wait for their status to become active, and then install the CloudWatch Observability add-on while managing the required IAM permissions separately. With this launch, you can install the Amazon CloudWatch Observability add-on when creating your clusters and launch them with observability enabled, making observability telemetry available in CloudWatch out of the box. You can then use curated dashboards from CloudWatch Application Signals and CloudWatch Container Insights to take proactive action to reduce application disruptions by isolating anomalies and troubleshooting faster.

The CloudWatch Observability add-on is now available on Amazon EKS in all commercial AWS Regions, including the AWS GovCloud (US) Regions. You can install, configure, and update the add-on with just a few clicks in the Amazon EKS console, or through the APIs, AWS Command Line Interface (AWS CLI), and AWS CloudFormation. To get started, follow the user guide.
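
For teams automating cluster setup, the following is a minimal boto3 sketch of installing the add-on on an existing cluster. The cluster name, service account, and IAM role ARN are placeholders, and the Pod Identity association parameters should be verified against the current EKS CreateAddon documentation.

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # Install the CloudWatch Observability add-on on an existing cluster.
    # "my-cluster" and the IAM role ARN are placeholders for illustration.
    response = eks.create_addon(
        clusterName="my-cluster",
        addonName="amazon-cloudwatch-observability",
        # Associate the add-on's service account with an IAM role via EKS Pod Identity
        # (verify the exact parameter shape against the current CreateAddon API reference).
        podIdentityAssociations=[
            {
                "serviceAccount": "cloudwatch-agent",
                "roleArn": "arn:aws:iam::111122223333:role/CloudWatchObservabilityAddonRole",
            }
        ],
    )
    print(response["addon"]["status"])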

AWS Marketplace introduces 8 decimal place precision for usage pricing

AWS Marketplace sellers can now price usage rates with up to 8 decimal places. This enhancement improves the precision of pay-as-you-go pricing where per-unit costs can be fractions of a cent ($0.00000001), enabling more accurate billing calculations for customers.

Previously, AWS Marketplace sellers were limited to using only 3 decimal places for usage pricing, restricting flexibility in pricing pay-as-you-go products. This increased precision gives sellers more control over pricing strategies. Sellers can now set more granular per-unit costs (for example, per megabyte or gigabyte), allowing for more accurate billing. This also benefits sellers operating in different currencies, allowing them to set more accurate equivalent US dollar (USD) prices in AWS Marketplace. Additionally, sellers can maintain specific profit margins with greater precision. For example, resellers can set a retail price of $0.0033 to maintain an exact 10% margin on a $0.003 wholesale price. These improvements offer sellers greater control and precision in pricing, leading to more granular rates for customers and improved profitability for sellers, especially in markets where small price differences matter.

This feature is available for software as a service (SaaS), server, and AWS Data Exchange products in all AWS Regions where AWS Marketplace is available. To learn more, access the AWS Marketplace Product Pricing documentation and AWS Marketplace API documentation. Start using this feature through the AWS Marketplace Management Portal or the AWS Marketplace Catalog API.
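
As a back-of-the-envelope illustration of why the extra precision matters for margins, the short Python sketch below uses the decimal module; the wholesale price and margin figures are the ones from the example above.

    from decimal import Decimal, ROUND_HALF_UP

    wholesale = Decimal("0.003")          # per-unit wholesale price in USD
    retail = wholesale * Decimal("1.10")  # target 10% margin -> 0.0033

    # With the old 3-decimal limit the listed rate collapses back to the wholesale
    # price, erasing the margin; with up to 8 decimals the intended rate is exact.
    old_limit = retail.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
    new_limit = retail.quantize(Decimal("0.00000001"), rounding=ROUND_HALF_UP)

    print(old_limit)  # 0.003      -> margin rounded away
    print(new_limit)  # 0.00330000 -> exact 10% margin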

AWS IoT SiteWise now supports null and NaN data types

Today, Amazon Web Services, Inc. announces that AWS IoT SiteWise now supports NULL and NaN (Not a Number) data of bad or uncertain quality from industrial data sources. AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize, and analyze data from industrial equipment at scale. This new feature enhances the service's capability to handle a wider range of data, improving its utility for industrial applications.

With this update, AWS IoT SiteWise collects, stores, and retrieves real-time or historical NULL values for all supported data types. It also supports NaN values for the double data type. Capturing NULL and NaN data is critical for various industrial use cases, including compliance reporting, observability, and downstream analytics, while also simplifying data set conditioning and cleaning for advanced analytics and machine learning applications. This new feature is available in all AWS Regions where AWS IoT SiteWise is available. To learn more about data ingestion and data quality handling on AWS IoT SiteWise, see the AWS IoT SiteWise documentation.
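
A minimal boto3 sketch of ingesting a NaN reading with a non-good quality flag is shown below. The asset and property IDs are placeholders, the NaN representation over the BatchPutAssetPropertyValue transport is an assumption, and the exact payload shape for NULL values should be confirmed in the AWS IoT SiteWise documentation.

    import time
    import boto3

    sitewise = boto3.client("iotsitewise", region_name="us-east-1")

    # Placeholder identifiers for illustration only.
    entry = {
        "entryId": "sensor-1",
        "assetId": "REPLACE_WITH_ASSET_ID",
        "propertyId": "REPLACE_WITH_PROPERTY_ID",
        "propertyValues": [
            {
                # NaN is only supported for double-typed properties (assumed encoding).
                "value": {"doubleValue": float("nan")},
                "timestamp": {"timeInSeconds": int(time.time())},
                # Flag the reading's quality as reported by the industrial source.
                "quality": "UNCERTAIN",
            }
        ],
    }

    response = sitewise.batch_put_asset_property_value(entries=[entry])
    print(response["errorEntries"])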

CloudWatch provides execution plan capture for Aurora PostgreSQL

Amazon CloudWatch Database Insights now collects the query execution plans of top SQL queries running on Aurora PostgreSQL instances and stores them over time. This feature helps you identify whether a change in the query execution plan is the cause of performance degradation or a stalled query. Execution plan capture for Aurora PostgreSQL is available exclusively in the Advanced mode of CloudWatch Database Insights.

A query execution plan is the sequence of steps a database engine uses to retrieve or modify data in a relational database management system (RDBMS). RDBMS query optimizers may not always choose the most optimal execution plan from the set of alternative ways to execute a given query, so database users sometimes need to manually examine and tune plans to improve performance. This feature lets you visualize multiple plans for a SQL query and compare them, helping you determine within minutes whether a change in a query's performance is due to a different execution plan.

You can get started with this feature by enabling Database Insights Advanced mode on your Aurora PostgreSQL clusters using the RDS console, AWS APIs, or the AWS SDK. CloudWatch Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis. CloudWatch Database Insights is available in all public AWS Regions and offers vCPU-based pricing; see the pricing page for details. For further information, visit the Database Insights documentation.
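
A minimal boto3 sketch of switching an existing Aurora PostgreSQL cluster to Advanced mode is shown below. The cluster identifier is a placeholder, and the DatabaseInsightsMode value and retention setting should be verified against the current RDS API reference.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Enable Database Insights Advanced mode on an existing cluster
    # ("my-aurora-pg-cluster" is a placeholder identifier).
    response = rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-pg-cluster",
        DatabaseInsightsMode="advanced",
        # Advanced mode builds on Performance Insights with extended retention
        # (retention value assumed here; check the RDS documentation).
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=465,
        ApplyImmediately=True,
    )
    print(response["DBCluster"]["DatabaseInsightsMode"])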

Amazon Connect now provides daily headcount projections in capacity plan downloads

Amazon Connect now provides daily headcount projections in capacity plan downloads, enhancing your ability to review staffing requirements with greater precision. While capacity plans already provided weekly and monthly projections, this launch gives you access to day-by-day headcount requirements for up to 64 weeks into the future. This granular view simplifies key staffing and hiring decisions, such as how many workers to hire, while accounting for seasonality and applying different shrinkage assumptions at the day level.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

AWS Client VPN announces support for concurrent VPN connections

Today, AWS announces the general availability of concurrent VPN connections for AWS Client VPN. This feature allows you to securely connect to multiple Client VPN profiles simultaneously, enabling access to your resources across different work environments.

AWS Client VPN allows your users to securely connect to your network remotely from any location. Previously, you could only connect to one VPN profile at a time, which limited your access to a single network; to access another network, you had to disconnect and reconnect to a different VPN profile. With this launch, you can connect to multiple VPN profiles simultaneously without switching. For example, software developers using the AWS-provided Client VPN client can now connect to development, test, and production environments concurrently. This feature allows seamless parallel connections to all required environments, significantly improving productivity for end users.

This feature is available only with the AWS-provided Client VPN client version 5.0 or later. You can download this version by following the steps here. This feature and the required client version are available at no additional cost in all AWS Regions where AWS Client VPN is generally available. To learn more about Client VPN:

Visit the AWS Client VPN product page

Read the AWS Client VPN documentation

Amazon EC2 introduces provisioning control for On-Demand Capacity Reservations in the AWS GovCloud (US) Regions

Amazon EC2 introduces new capabilities that make it easy for customers to target instance launches on their On-Demand Capacity Reservations (ODCRs). On-Demand Capacity Reservations help you reserve compute capacity for your workloads in a specified Availability Zone for any duration. You can now ensure instance launches are fulfilled exclusively by ODCRs, or prefer unutilized ODCRs before falling back to On-Demand capacity.

To get started, you can specify your capacity reservation preferences for your EC2 Auto Scaling groups via the AWS Console or the AWS CLI. These preferences can also be configured using EC2 RunInstances API calls. These features are available in both of the AWS GovCloud (US) Regions. To learn more, see the Capacity Reservations user guide and the EC2 Auto Scaling user guide.
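
A minimal boto3 sketch of launching an instance that must be fulfilled by a Capacity Reservation is shown below. The AMI ID and instance type are placeholders, and the preference value name should be confirmed in the EC2 RunInstances documentation for the GovCloud (US) Regions.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-gov-west-1")

    # Launch an instance that must be fulfilled by an On-Demand Capacity Reservation;
    # with this preference the launch fails rather than falling back to regular
    # On-Demand capacity. AMI ID and instance type are placeholders for illustration.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        CapacityReservationSpecification={
            "CapacityReservationPreference": "capacity-reservations-only"
        },
    )
    print(response["Instances"][0]["InstanceId"])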

Amazon Redshift announces support for History Mode for zero-ETL integrations

Today, Amazon Redshift announces the launch of history mode for zero-ETL integrations. This new feature enables you to build Type 2 Slowly Changing Dimension (SCD 2) tables on your historical data from databases, out of the box in Amazon Redshift, without writing any code. History mode simplifies the process of tracking and analyzing historical data changes, allowing you to gain valuable insights from your data's evolution over time.

With history mode, you can easily run advanced analytics on historical data, build lookback reports, and perform trend analysis across multiple zero-ETL data sources, including Amazon DynamoDB, Amazon RDS for MySQL, Amazon Aurora MySQL, and Amazon Aurora PostgreSQL. By preserving the complete history of data changes without maintaining duplicate copies across data sources, history mode helps organizations meet data retention requirements while significantly reducing storage needs and operational costs. History mode is available for both existing and new integrations, and you can selectively enable historical tracking for specific tables within your integration for enhanced flexibility in your data analysis. To learn more and get started with zero-ETL integration, visit the getting started guides for Amazon Redshift. For more information on history mode and its benefits, visit the documentation.
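
For a sense of what querying an SCD 2 table produced by history mode can look like, the sketch below uses the Redshift Data API from Python. The workgroup, database, table, and history-tracking column names are hypothetical placeholders; the actual column names added by history mode are defined in the documentation.

    import boto3

    redshift_data = boto3.client("redshift-data", region_name="us-east-1")

    # Hypothetical query against a history-mode table: list every version a row has
    # gone through, oldest first. Table and column names are placeholders only.
    sql = """
        SELECT customer_id, status, _record_create_time, _record_delete_time, _record_is_active
        FROM public.customers_history
        WHERE customer_id = 42
        ORDER BY _record_create_time;
    """

    response = redshift_data.execute_statement(
        WorkgroupName="my-serverless-workgroup",  # placeholder workgroup
        Database="dev",
        Sql=sql,
    )
    print(response["Id"])  # statement ID to poll with describe_statement / get_statement_result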

Amazon DynamoDB introduces warm throughput for tables and indexes in the AWS GovCloud (US) Regions

Amazon DynamoDB now supports a new warm throughput value and the ability to easily pre-warm DynamoDB tables and indexes in the AWS GovCloud (US) Regions. The warm throughput value provides visibility into the number of read and write operations your DynamoDB tables can readily handle, while pre-warming lets you proactively increase the value to meet future traffic demands.

DynamoDB automatically scales to support workloads of virtually any size. However, during peak events like product launches or shopping events, request rates can surge 10x or even 100x in a short period of time. You can now check your tables' warm throughput values to assess whether a table can handle large traffic spikes for peak events. If you expect an upcoming peak event to exceed the current warm throughput value for a given table, you can pre-warm that table in advance to ensure it scales instantly to meet demand. Warm throughput values are available for all provisioned and on-demand tables and indexes at no cost; pre-warming your table's throughput incurs a charge. See the Amazon DynamoDB pricing page for pricing details and the Developer Guide to learn more.
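
A minimal boto3 sketch of checking and raising a table's warm throughput is shown below. The table name and target values are placeholders, and the exact request and response field names should be checked against the DynamoDB API reference.

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-gov-west-1")

    # Check the current warm throughput value for a table ("Orders" is a placeholder).
    table = dynamodb.describe_table(TableName="Orders")["Table"]
    print(table.get("WarmThroughput"))

    # Pre-warm the table ahead of a peak event; note that pre-warming is chargeable.
    dynamodb.update_table(
        TableName="Orders",
        WarmThroughput={
            "ReadUnitsPerSecond": 50000,
            "WriteUnitsPerSecond": 15000,
        },
    )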

AWS Blogs

AWS Japan Blog (Japanese)

AWS Architecture Blog

AWS Cloud Operations Blog

AWS Database Blog

Desktop and Application Streaming

AWS Machine Learning Blog

AWS for M&E Blog

Networking & Content Delivery

Open Source Project

AWS CLI

Amazon EKS Anywhere