9/4/2025, 12:00:00 AM ~ 9/5/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon EC2 announces AMI Usage to better monitor the use of AMIs
Amazon EC2 introduces AMI Usage, providing new capabilities that allow you to track AMI consumption across AWS accounts and identify resources in your account that depend on particular AMIs. This enhanced visibility helps you monitor AMI utilization patterns across your AWS infrastructure and safely manage AMI deregistrations.

Until now, you had to write custom scripts to track the use of AMIs across accounts and resources, leading to operational overhead. With AMI Usage, you can generate a report that lists the accounts that are using your AMIs in EC2 instances and launch templates. You can also check the utilization of any AMI within your account across multiple resources, including instances, launch templates, Image Builder recipes, and SSM parameters. These new capabilities empower you to maintain clear oversight of AMI usage across your AWS ecosystem, better manage the lifecycle of your AMIs, and optimize costs. AMI Usage is available to all customers at no additional cost in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. To learn more, please visit our documentation.
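For context, the sketch below shows the kind of single-account, hand-rolled check this feature replaces, using only long-standing EC2 APIs (`describe_instances` and `describe_launch_template_versions`); the AMI ID is a placeholder.

```python
import boto3

# A minimal sketch of the custom AMI-usage check that the new AMI Usage
# report replaces. Uses only long-standing EC2 APIs; the AMI ID is a
# placeholder.
AMI_ID = "ami-0123456789abcdef0"
ec2 = boto3.client("ec2")

# Instances launched from the AMI.
reservations = ec2.describe_instances(
    Filters=[{"Name": "image-id", "Values": [AMI_ID]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
print(f"Instances using {AMI_ID}: {instance_ids}")

# Launch templates whose default version references the AMI.
paginator = ec2.get_paginator("describe_launch_templates")
for page in paginator.paginate():
    for lt in page["LaunchTemplates"]:
        versions = ec2.describe_launch_template_versions(
            LaunchTemplateId=lt["LaunchTemplateId"],
            Versions=["$Default"],
        )["LaunchTemplateVersions"]
        for v in versions:
            if v["LaunchTemplateData"].get("ImageId") == AMI_ID:
                print(f"Launch template using {AMI_ID}: {lt['LaunchTemplateName']}")
```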
Amazon Neptune Database now supports Public Endpoints for simplified development access
Amazon Neptune Database, a fully managed graph database service, now supports Public Endpoints, allowing developers to connect directly to Neptune databases from their development desktops without complex networking configurations.

With Public Endpoints, developers can securely access their Neptune databases from outside the VPC, eliminating the need for VPN connections, bastion hosts, or other networking workarounds. This feature streamlines the development process while maintaining security through existing controls like IAM authentication, VPC security groups, and encryption in transit. Public Endpoints can be enabled for new or existing Neptune clusters running engine version 1.4.6 or above, through the AWS Management Console, AWS CLI, or AWS SDK. When enabled, Neptune generates a publicly accessible endpoint that developers can use with standard Neptune connection methods from their development machines. This feature is available at no additional cost beyond standard Neptune pricing and is available today in all AWS Regions where Neptune Database is offered. To learn more, visit the Amazon Neptune documentation.
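As an illustration, once a public endpoint is enabled, a standard Gremlin connection from a development machine might look like the sketch below. It uses the open-source gremlinpython driver; the endpoint hostname is a placeholder, and it assumes the cluster's security group allows your client IP and that IAM authentication is disabled (request signing is also possible but omitted here).

```python
# Minimal sketch: connect to a Neptune public endpoint with gremlinpython
# (pip install gremlinpython). The endpoint is a placeholder; assumes the
# cluster security group allows your client IP and IAM auth is off.
from gremlin_python.driver import client

ENDPOINT = "my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com"

gremlin = client.Client(
    f"wss://{ENDPOINT}:8182/gremlin",  # Neptune encrypts in transit
    "g",
)
try:
    # Simple smoke test: count a handful of vertices.
    count = gremlin.submit("g.V().limit(10).count()").all().result()
    print(f"Vertex count (first 10): {count}")
finally:
    gremlin.close()
```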
Validate best practice compliance for SAP with AWS Systems Manager
AWS Systems Manager Configuration Manager now supports SAP HANA, allowing you to automatically test your SAP HANA databases running on AWS against best practices defined in the AWS Well-Architected Framework SAP Lens.

Keeping SAP optimally configured requires SAP administrators to stay current with best practices from multiple sources, including AWS, SAP, and operating system vendors, and to manually check their configurations to validate adherence. AWS Systems Manager Configuration Manager automatically assesses SAP applications running on AWS against these standards, proactively identifying misconfigurations and recommending specific remediation steps, allowing you to make the necessary changes before they impact business operations. Configuration checks can be scheduled or run on demand. SSM for SAP Configuration Manager is available in all commercial AWS Regions. To learn more, visit the AWS Systems Manager for SAP documentation.
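Configuration checks run against SAP applications already registered with AWS Systems Manager for SAP. As a starting point, the sketch below lists registered applications with the existing ssm-sap API; the check-scheduling operations themselves are covered in the documentation linked above.

```python
import boto3

# Sketch: enumerate SAP applications registered with AWS Systems Manager
# for SAP. Registered applications are what Configuration Manager assesses;
# see the SSM for SAP docs for scheduling the checks themselves.
ssm_sap = boto3.client("ssm-sap")

response = ssm_sap.list_applications()
for app in response.get("Applications", []):
    print(app["Id"], app["Type"])
```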
Amazon Managed Service for Prometheus now supports viewing applied quota values with AWS Service Quotas and Amazon CloudWatch
Amazon Managed Service for Prometheus, a fully managed Prometheus-compatible monitoring service, now allows you to view applied quota values and their utilization for your Amazon Managed Service for Prometheus workspaces using AWS Service Quotas and Amazon CloudWatch. This update gives you a comprehensive view of quota utilization across your workspaces.

AWS Service Quotas allows you to quickly understand your applied service quota values and request increases in a few clicks. With Amazon CloudWatch usage metrics, you can create alarms to be notified when your Amazon Managed Service for Prometheus workspaces approach applied limits and visualize usage in CloudWatch dashboards. Usage metrics for Amazon Managed Service for Prometheus service limits are available at no additional cost and are always enabled. You can access Service Quotas and usage metrics in CloudWatch through the AWS console, AWS APIs, and CLI. These features are available in all AWS regions where Amazon Managed Service for Prometheus is generally available. For detailed information, check out the Amazon Managed Service for Prometheus user guide. To learn more about the service, visit the product page and pricing page.
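As a sketch of what this looks like programmatically, the snippet below lists applied quotas through the Service Quotas API. It assumes the service code for Amazon Managed Service for Prometheus is `aps`; confirm the code with `list_services` before relying on it.

```python
import boto3

# Sketch: list applied quota values for Amazon Managed Service for
# Prometheus via AWS Service Quotas. The service code "aps" is an
# assumption; confirm it with quotas.list_services() first.
quotas = boto3.client("service-quotas")

paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="aps"):
    for q in page["Quotas"]:
        print(f'{q["QuotaName"]}: {q["Value"]}')
```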
AWS adds support for three new condition keys to govern API keys for Amazon Bedrock
AWS today launched three new condition keys that help administrators govern API keys for Amazon Bedrock. The new condition keys help you control the generation, expiration, and type of API keys allowed. Amazon Bedrock supports two types of API keys: short-term API keys valid for up to 12 hours, and long-term API keys, which are IAM service-specific credentials for use with Bedrock only.

The new iam:ServiceSpecificCredentialServiceName condition key lets you control which target AWS services are allowed when creating IAM service-specific credentials. For example, you could allow the creation of Bedrock long-term API keys but not credentials for AWS CodeCommit or Amazon Keyspaces. The new iam:ServiceSpecificCredentialAgeDays condition key lets you control the maximum duration of Bedrock long-term API keys at creation. The new bedrock:BearerTokenType condition key lets you allow or deny Bedrock requests based on whether the API key is short-term or long-term. These new condition keys are available in all AWS Regions. To learn more about using the new condition keys, visit the IAM User Guide or Amazon Bedrock User Guide.
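For illustration, a policy using the first two keys might look like the sketch below, built as a Python dict and created with boto3. The service-name value `bedrock.amazonaws.com` follows the convention used by other service-specific credentials, and the 30-day cap is an arbitrary example; check the IAM User Guide for the exact value formats.

```python
import json
import boto3

# Sketch: allow creating service-specific credentials only for Bedrock, and
# only with a lifetime of 30 days or less. Value formats are assumptions
# based on the announcement; confirm them in the IAM User Guide.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceSpecificCredential",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:ServiceSpecificCredentialServiceName": "bedrock.amazonaws.com"
                },
                "NumericLessThanEquals": {
                    "iam:ServiceSpecificCredentialAgeDays": "30"
                },
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="bedrock-long-term-api-keys-only",
    PolicyDocument=json.dumps(policy),
)
```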
AWS HealthOmics is now available in Asia Pacific (Seoul) Region
Today, AWS announces that AWS HealthOmics private workflows are now available in the Asia Pacific (Seoul) Region, expanding access to fully managed bioinformatics workflows for healthcare and life sciences customers in Korea. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed bioinformatics workflows. HealthOmics enables customers to focus on scientific discovery rather than infrastructure management, reducing time to value for research, drug discovery, and agriculture science initiatives.

With private workflows, customers can build and scale genomics data analysis pipelines using familiar domain-specific languages including Nextflow, WDL, and CWL. The service provides built-in features such as call caching to resume runs, dynamic run storage that automatically scales with run storage needs, Git integrations for version-controlled workflow development, and third-party container registry support through Amazon ECR pull-through cache. These capabilities make it easier to migrate existing pipelines and accelerate development of new genomics workflows while maintaining full data provenance and compliance requirements. Private workflows are now available in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Israel (Tel Aviv), Asia Pacific (Singapore), and Asia Pacific (Seoul). To get started, see the AWS HealthOmics documentation.
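As a sketch, starting a private workflow run in the new Region with boto3 might look like the following; the workflow ID, IAM role, and S3 locations are placeholders.

```python
import boto3

# Sketch: start a private workflow run in the Asia Pacific (Seoul) Region.
# Workflow ID, role ARN, and S3 URIs are placeholders.
omics = boto3.client("omics", region_name="ap-northeast-2")

run = omics.start_run(
    workflowId="1234567",
    workflowType="PRIVATE",
    name="demo-run",
    roleArn="arn:aws:iam::111122223333:role/HealthOmicsRunRole",
    parameters={"input_fastq": "s3://my-bucket/sample.fastq.gz"},
    outputUri="s3://my-bucket/runs/",
)
print(run["id"], run["status"])
```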
CloudFormation Hooks adds Managed Controls and Hook Activity Summary
AWS CloudFormation Hooks now supports managed proactive controls, enabling customers to validate resource configurations against AWS best practices without writing custom Hooks logic. Customers can select controls from the AWS Control Tower Controls Catalog and apply them during CloudFormation operations. When using CloudFormation, customers can configure these controls to run in warn mode, allowing teams to test controls without blocking deployments and giving them the flexibility to evaluate control behavior before enforcing policies in production. This significantly reduces setup time, eliminates manual errors, and ensures comprehensive governance coverage across your infrastructure.

AWS also introduced a new Hooks Invocation Summary page in the CloudFormation console. This centralized view provides a complete historical record of Hooks activity, showing which controls were invoked, their execution details, and outcomes such as pass, warn, or fail. This simplifies compliance reporting and helps teams resolve issues faster.
With this launch, customers can now leverage AWS-managed controls as part of their provisioning workflows, eliminating the overhead of writing and maintaining custom logic. These controls are curated by AWS and aligned with industry best practices, helping teams enforce consistent policies across all environments. The new summary page delivers essential visibility into Hook invocation history, enabling faster issue resolution and streamlined compliance reporting.
The Hook invocation summary page is available in all commercial and GovCloud (US) Regions, and control selection is available in all commercial Regions. To learn more, visit the AWS CloudFormation Proactive Control Hooks and AWS CloudFormation Hooks View Invocations documentation.
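To illustrate warn mode, the sketch below activates a Hook and sets its failure mode with the existing CloudFormation type APIs; the hook ARN is a placeholder, and managed controls selected from the Controls Catalog may take additional configuration beyond this generic HookConfiguration example.

```python
import json
import boto3

# Sketch: activate a Hook and run it in warn mode so deployments are not
# blocked while you evaluate control behavior. The type ARN is a
# placeholder; managed controls may use additional configuration.
cfn = boto3.client("cloudformation")

activation = cfn.activate_type(
    Type="HOOK",
    PublicTypeArn="arn:aws:cloudformation:us-east-1::type/hook/ExamplePublisher-Example-Hook",
    TypeNameAlias="MyOrg::Governance::ExampleHook",
)

cfn.set_type_configuration(
    TypeArn=activation["Arn"],
    Configuration=json.dumps(
        {
            "CloudFormationConfiguration": {
                "HookConfiguration": {
                    "FailureMode": "WARN",  # warn instead of failing the operation
                    "TargetStacks": "ALL",
                }
            }
        }
    ),
)
```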
ECS Exec is now available in the AWS Management Console
The Amazon ECS console now supports ECS Exec, enabling you to open secure, interactive shell access directly from the AWS Management Console to any running container.

ECS customers often need to access running containers to debug applications and examine running processes. ECS Exec provides easy and secure access to running containers without requiring inbound ports or SSH key management. Previously, ECS Exec was only accessible through the AWS API, CLI, or SDKs, requiring customers to switch interfaces when troubleshooting in the console. With this new feature, customers can now connect to running containers directly from the AWS Management Console, streamlining troubleshooting workflows.
To get started, you can turn on ECS Exec directly in the console when creating or updating services and standalone tasks. Additional settings like encryption and logging can also be configured at the cluster level through the console. Once enabled, simply navigate to a task details page, select a container, and click “Connect” to open an interactive session through CloudShell. The console also displays the underlying AWS CLI command, which you can customize or copy to use in your local terminal.
ECS Exec console support is now available in all AWS commercial regions. To learn more, visit the ECS developer guide.
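Outside the console, the same capability remains available programmatically; a minimal boto3 sketch is below. Cluster, service, and task identifiers are placeholders, and the interactive session itself is normally driven by the AWS CLI with the Session Manager plugin (which the console's CloudShell integration handles for you).

```python
import boto3

# Sketch: enable ECS Exec on a service, then request an exec session in a
# running container. Names and IDs are placeholders. The returned session
# is normally consumed by the AWS CLI / Session Manager plugin; the
# console's "Connect" button wraps this same API.
ecs = boto3.client("ecs")

# 1) Enable ECS Exec on the service (new tasks pick up the setting).
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    enableExecuteCommand=True,
    forceNewDeployment=True,
)

# 2) Start a command session against a running task's container.
response = ecs.execute_command(
    cluster="my-cluster",
    task="arn:aws:ecs:us-east-1:111122223333:task/my-cluster/0123456789abcdef0",
    container="app",
    interactive=True,
    command="/bin/sh",
)
print(response["session"]["sessionId"])
```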
PostgreSQL 18 RC1 is now available in Amazon RDS Database Preview Environment
Amazon RDS for PostgreSQL 18 Release Candidate 1 (RC1) is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 18 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 18 RC1 in the Amazon RDS Database Preview Environment with the benefits of a fully managed database.

PostgreSQL 18 includes “skip scan” support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel GIN (Generalized Inverted Index) builds and updates join operations. Observability improvements show buffer usage counts and index lookups during query execution, along with per-connection I/O utilization metrics. Please refer to the RDS PostgreSQL release documentation for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
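To see what the skip-scan change means in practice, the sketch below (using psycopg2 against any PostgreSQL 18 instance, such as one in the preview environment) creates a multicolumn B-tree index and EXPLAINs a query that omits the leading column; on PostgreSQL 18 the planner can now consider the index for such queries. Connection details are placeholders.

```python
# Sketch: observe PostgreSQL 18's multicolumn B-tree "skip scan" in a plan.
# Requires psycopg2 (pip install psycopg2-binary); connection parameters
# are placeholders for your preview-environment instance.
import psycopg2

conn = psycopg2.connect(
    host="mydb.example.us-east-2.rds-preview.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="...",
)
conn.autocommit = True
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS t")
cur.execute("CREATE TABLE t (a int, b int)")
cur.execute("INSERT INTO t SELECT i % 10, i FROM generate_series(1, 100000) i")
cur.execute("CREATE INDEX t_a_b_idx ON t (a, b)")  # multicolumn B-tree
cur.execute("ANALYZE t")

# The WHERE clause skips the leading column 'a'; PostgreSQL 18 can still
# consider t_a_b_idx by skipping over the 10 distinct values of 'a'.
cur.execute("EXPLAIN SELECT * FROM t WHERE b = 12345")
for (line,) in cur.fetchall():
    print(line)

conn.close()
```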
AWS Clean Rooms supports configurable compute size for PySpark jobs
AWS Clean Rooms now supports configurable compute size for PySpark, offering customers the flexibility to customize and allocate resources to run PySpark jobs based on their performance, scale, and cost requirements. With this launch, customers can specify the instance type and cluster size at job runtime for each analysis that uses PySpark, the Python API for Apache Spark. For example, customers can use large instance configurations to achieve the performance needed for their complex data sets and analyses, or smaller instances to optimize costs.

AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
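A hedged sketch of what specifying compute at job runtime could look like with boto3 is below; the `start_protected_job` operation name, the `computeConfiguration` shape (borrowed from the documented Spark SQL path), and all identifiers are assumptions to verify against the AWS Clean Rooms API reference.

```python
import boto3

# Hedged sketch: run a PySpark analysis with an explicit worker type and
# count. The operation name, computeConfiguration shape, and identifiers
# are assumptions; verify against the AWS Clean Rooms API reference.
cleanrooms = boto3.client("cleanrooms")

cleanrooms.start_protected_job(
    membershipIdentifier="membership-1234",
    type="PYSPARK",
    jobParameters={"analysisTemplateArn": "arn:aws:cleanrooms:..."},
    computeConfiguration={
        "worker": {
            "type": "CR.4X",  # larger workers for complex analyses
            "number": 16,     # cluster size chosen per job
        }
    },
)
```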
Amazon Connect adds detailed disconnect reasons for improved call troubleshooting
Amazon Connect now offers expanded disconnect reasons to help you better understand why outbound calls failed to connect in your contact center. These enhanced reasons are based on standard telecom error codes that provide deeper call insights and enable faster troubleshooting, reducing the need to create support tickets to understand failure reasons. You’ll benefit from improved reporting capabilities with granular disconnection data and real-time visibility through Contact Trace Records, allowing you to monitor call disconnection patterns more effectively.

To learn more, refer to our public documentation and best practice guide. This new feature is available in all AWS regions where Amazon Connect is available. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
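Contact Trace Records (CTRs) are typically streamed out of Connect via Kinesis as JSON; a minimal sketch of tallying disconnect reasons from a CTR stream is below. `DisconnectReason` is the documented CTR field the expanded reasons appear in; the stream name is a placeholder.

```python
import json
import boto3

# Sketch: tally disconnect reasons from Contact Trace Records streamed to
# Kinesis. Stream name is a placeholder; DisconnectReason is the documented
# CTR field that carries the expanded reasons.
kinesis = boto3.client("kinesis")
STREAM = "connect-ctr-stream"

shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

counts = {}
records = kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]
for record in records:
    ctr = json.loads(record["Data"])  # boto3 returns decoded bytes
    reason = ctr.get("DisconnectReason", "UNKNOWN")
    counts[reason] = counts.get(reason, 0) + 1
print(counts)
```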
Amazon ECR repository creation templates are now available in the AWS GovCloud (US) Regions
Amazon Elastic Container Registry (ECR) now supports repository creation templates in the AWS GovCloud (US) Regions. Repository creation templates allow you to configure the settings for the new repositories that Amazon ECR creates on your behalf during pull through cache and replication operations. These settings include encryption, lifecycle policies, access permissions, and tag immutability. Each template uses a prefix to match and apply configurations to new repositories automatically, enabling you to maintain consistent settings across your container registries.

To learn more about ECR repository creation templates, see our documentation.
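As a sketch, creating a template that governs repositories auto-created by a pull through cache rule might look like this; the prefix and settings are examples.

```python
import boto3

# Sketch: a creation template for repositories that ECR auto-creates under
# the "ecr-public/" prefix during pull through cache. Prefix and settings
# are examples.
ecr = boto3.client("ecr")

ecr.create_repository_creation_template(
    prefix="ecr-public/",
    description="Defaults for pull-through-cache repositories",
    appliedFor=["PULL_THROUGH_CACHE"],
    imageTagMutability="IMMUTABLE",
    encryptionConfiguration={"encryptionType": "AES256"},
)
```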
YouTube
AWS Black Belt Online Seminar (Japanese)
- AWS Database Migration Service Best Practices - Troubleshooting [AWS Black Belt]
- Amazon Connect Outbound Campaign Detailed Explanation [AWS Black Belt]
- Video distribution using AWS Elemental Media Services (basics) [AWS Black Belt]
AWS Blogs
AWS Compute Blog
- Serverless generative AI architectural patterns – Part 2
- Serverless generative AI architectural patterns – Part 1
AWS Database Blog
- Reduce your Amazon ElastiCache costs by up to 60% with Valkey and CUDOS
- Getting started with Amazon EC2 bare metal instances for Amazon RDS for Oracle and Amazon RDS Custom for Oracle
Artificial Intelligence
- Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 2
- Build character consistent storyboards using Amazon Nova in Amazon Bedrock – Part 1
AWS for M&E Blog
- AWS to show new generative AI, news distribution innovations at IBC 2025
- Reuters and AWS demonstrate next-generation news distribution at IBC 2025