7/22/2025, 12:00:00 AM ~ 7/23/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon Timestream for InfluxDB now supports 24xlarge memory-optimized instances
Amazon Timestream for InfluxDB now offers 24xlarge memory-optimized instances, providing enhanced performance for demanding time-series workloads. This new instance type is generally available for both Single-AZ and Multi-AZ deployments, as well as Multi-AZ Read Replica clusters, enabling customers to scale their time-series database solutions.

The 24xlarge instance delivers 96 vCPU, 768 GiB of memory, and up to 40 Gbps of enhanced network bandwidth. This makes it ideal for large-scale, I/O-intensive time-series applications that require fast response times at scale, such as industrial telemetry, IoT analytics, and financial trading platforms. This feature is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm).
You can provision 24xlarge memory-optimized instances from the Amazon Timestream console, the AWS Command Line Interface (CLI), the AWS SDKs, or AWS CloudFormation. To learn more, visit the product page, documentation, and pricing page.
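If you provision instances programmatically, the new size is requested through the existing CreateDbInstance API by passing the larger instance type. The following is a minimal boto3 sketch; the subnet, security group, and credential values are placeholders, and the db.influx.24xlarge type string is an assumption based on the naming of the existing instance classes.

```python
import boto3

# Timestream for InfluxDB has its own service client, separate from classic Timestream.
influxdb = boto3.client("timestream-influxdb", region_name="us-east-1")

# Minimal sketch: the "db.influx.24xlarge" type string is assumed to follow the
# naming of the existing memory-optimized classes; all IDs and credentials below
# are placeholders for your own environment.
response = influxdb.create_db_instance(
    name="telemetry-influxdb",
    username="admin",
    password="REPLACE_WITH_SECRET",        # e.g. retrieved from AWS Secrets Manager
    organization="example-org",
    bucket="telemetry",
    dbInstanceType="db.influx.24xlarge",   # new 24xlarge memory-optimized size
    deploymentType="WITH_MULTIAZ_STANDBY", # or "SINGLE_AZ"
    allocatedStorage=400,                  # GiB of InfluxDB-managed storage
    vpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    vpcSecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["id"], response["status"])
```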
Amazon EBS io2 Block Express supports all commercial and AWS GovCloud (US) Regions
Amazon EBS io2 Block Express volumes are now available in all commercial and AWS GovCloud (US) Regions, except China Regions.

io2 Block Express leverages the latest generation of EBS storage server architecture designed to deliver consistent sub-millisecond latency and 99.999% durability. With a single io2 Block Express volume, you can achieve 256,000 IOPS, 4 GiB/s throughput, and 64 TiB storage capacity. You can also attach an io2 Block Express volume to multiple instances in the same Availability Zone, supporting shared storage fencing through NVMe reservations for improved application availability and scalability. With the lowest p99.9 I/O latency among major cloud providers, io2 Block Express is the ideal choice for the most I/O-intensive, mission-critical deployments such as SAP HANA, Oracle, SQL Server, and IBM DB2.
Customers using io1 volumes can upgrade to io2 Block Express without any downtime using the ModifyVolume API to gain 100x higher durability, consistent sub-millisecond latency, and significantly higher performance at the same or lower cost than io1. With io2 Block Express, you can drive up to 4x IOPS and 4x throughput at the same storage price as io1, and up to 50% lower IOPS cost for volumes over 32,000 IOPS. You can use AWS Compute Optimizer and AWS Cost Optimization Hub to get recommendations for the optimal io2 volume performance required by your workloads, based on the io1 utilization data collected by Amazon CloudWatch.
You can create and manage io2 Block Express volumes using the AWS Management Console, the AWS CLI, or the AWS SDKs. For more information on io2 Block Express, see our product overview page.
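Because the io1-to-io2 upgrade is an online volume modification, the ModifyVolume call is all that is needed. The following boto3 sketch shows the idea; the volume ID and target IOPS are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Upgrade an attached io1 volume to io2 Block Express in place (no downtime).
# The volume ID and target IOPS below are placeholders for your own values.
volume_id = "vol-0123456789abcdef0"

response = ec2.modify_volume(
    VolumeId=volume_id,
    VolumeType="io2",   # io2 volumes run on the Block Express architecture
    Iops=64000,         # io2 supports up to 256,000 IOPS per volume
)
print(response["VolumeModification"]["ModificationState"])

# Optionally check the progress of the modification
status = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
print(status["VolumesModifications"][0]["Progress"])
```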
AWS Audit Manager enhances evidence collection for better compliance insights
Today, AWS Audit Manager announces updates to 14 standard frameworks to enhance evidence collection capabilities and help customers meet their compliance requirements while optimizing costs. This update improves evidence relevance across key frameworks like SOC 2 and PCI DSS v4.0, and enhances framework coverage for better compliance validation.

These updates will streamline the number of findings for most customers and reduce associated costs. The cost reduction will depend on the number of AWS resources a customer uses, the frameworks they're assessing, and the degree of overlapping controls between these frameworks. If you are an existing Audit Manager customer and created assessments for these frameworks on or after June 6, 2024, no action is required to use the updated standard frameworks. If you created assessments for these frameworks prior to June 6, 2024, please create new assessments for these frameworks to receive the latest updates. For more information about the updated frameworks, see the documentation. To learn more about AWS Audit Manager availability and pricing, see the AWS Region table and AWS Audit Manager pricing. To learn more about AWS Audit Manager, see aws.amazon.com/audit-manager.
AWS Deadline Cloud now supports connecting resources in your Amazon Virtual Private Cloud (VPC) to service-managed fleets
AWS Deadline Cloud now supports connecting resources in your Amazon Virtual Private Cloud (VPC), like shared storage or a license server, to your service-managed fleets. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects for films, television, broadcasting, web content, and design.

Render farm workers need access to the storage locations that contain the input files necessary to process a job, and to the locations that store the output. Extending the existing capability to use S3-based storage, like the Deadline Cloud job attachments feature, resource endpoints (powered by AWS PrivateLink) make it easy to securely connect high-performance file systems, like Amazon FSx or Qumulo, to your Deadline Cloud service-managed fleets. Workers also need access to licenses to complete jobs when using licensed software, and resource endpoints simplify bringing your own licenses to service-managed fleets. Resource endpoints for service-managed fleets are available in all AWS Regions where AWS Deadline Cloud is offered. To learn more, see the AWS Deadline Cloud documentation. For pricing information, visit the Deadline Cloud pricing page.
AWS Client VPN extends availability to two additional AWS Regions
AWS Client VPN is now available in two new Asia Pacific Regions: Malaysia and Thailand. This fully managed service enables customers to securely connect their remote workforce to resources in AWS or on-premises networks.
AWS Client VPN eliminates the need for hardware VPN appliances and complex operational management through its pay-as-you-go model. Organizations can easily manage and monitor VPN connections through a single console.
To learn more about Client VPN:
Visit the AWS Client VPN product page.
Read the AWS Client VPN documentation.
See the AWS Client VPN pricing page.
Simplify AWS Organization Tag Policies using new wildcard statement
AWS Organizations Tag Policies announces wildcard support for Tag Policies using ALL_SUPPORTED in the Resource element. With this, you can simplify your policy authoring experience and reduce your policy size. You can now specify that your Tag Policy applies to all supported resource types for a given AWS service in a single line, instead of adding them to your policy individually.

Tag Policies enable you to enforce consistent tagging across your AWS accounts with proactive compliance, governance, and control. For example, you can define a policy that all EC2 instances with the "Environment" tag key must use only "Prod" or "Non-Prod" values. Previously, you had to list each EC2 resource type individually in a Tag Policy, such as instances, volumes, and snapshots. With the ALL_SUPPORTED wildcard, you can now apply the same rule to all supported EC2 or S3 resource types in a single line. You can use this feature via the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. This feature is available in all AWS Regions where AWS Organizations Tag Policies is available. To learn more, visit the Tag Policies documentation.
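As an illustration, the following boto3 sketch creates such a Tag Policy for the "Environment" example above; it assumes the wildcard is written as ec2:ALL_SUPPORTED inside the policy's enforced_for element, and the policy name and values are placeholders.

```python
import json
import boto3

orgs = boto3.client("organizations")

# Tag Policy requiring the "Environment" key on EC2 resources to use one of two
# values. The single "ec2:ALL_SUPPORTED" entry (assumed wildcard syntax) replaces
# listing every EC2 resource type (instance, volume, snapshot, ...) individually.
tag_policy = {
    "tags": {
        "Environment": {
            "tag_key": {"@@assign": "Environment"},
            "tag_value": {"@@assign": ["Prod", "Non-Prod"]},
            "enforced_for": {"@@assign": ["ec2:ALL_SUPPORTED"]},
        }
    }
}

response = orgs.create_policy(
    Name="environment-tag-policy",
    Description="Enforce Environment tag values on all supported EC2 resource types",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
print(response["Policy"]["PolicySummary"]["Id"])
```

The created policy can then be attached to an organizational unit or account with the usual AttachPolicy call.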
IAM Access Analyzer supports additional analysis findings and checks in AWS GovCloud (US) Regions
AWS Identity and Access Management (IAM) Access Analyzer now supports unused access findings, internal access findings, and custom policy checks in the AWS GovCloud (US-East and US-West) Regions to help guide you toward least privilege.

IAM Access Analyzer continuously analyzes your accounts to identify unused access and surfaces findings that highlight unused roles, unused access keys for IAM users, and unused passwords for IAM users. For active IAM roles and users, the findings provide visibility into unused services and actions. With internal access findings, you can identify who within your AWS organization has access to your Amazon S3, Amazon DynamoDB, or Amazon Relational Database Service (RDS) resources. It uses automated reasoning to evaluate all identity policies, resource policies, service control policies (SCPs), and resource control policies (RCPs) to surface all IAM users and roles that have access to your selected critical resources. After the new analyzers are enabled in the IAM console, the updated dashboard highlights your AWS accounts and resources that have the most findings and provides a breakdown of findings by type.

Security teams can respond to new findings in two ways: taking immediate action to fix unintended access, or setting up automated notifications through Amazon EventBridge to engage development teams for remediation. Custom policy checks also use the power of automated reasoning to help security teams proactively detect nonconformant updates to policies, for example, IAM policy changes that are more permissive than their previous version. Security teams can use these checks to streamline their reviews, automatically approving policies that conform with their security standards and inspecting more deeply when they don't (a sketch of one such check follows the links below). To learn more about IAM Access Analyzer:
See the documentation
Review the pricing
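As one illustration of custom policy checks, the CheckNoNewAccess API compares a proposed policy against a reference version and fails when the new version grants access the old one did not. The following boto3 sketch uses placeholder policy documents.

```python
import json
import boto3

analyzer = boto3.client("accessanalyzer", region_name="us-gov-west-1")

# Reference (existing) policy and a proposed update; both are placeholder
# documents for illustration only.
existing_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                   "Resource": "arn:aws-us-gov:s3:::example-bucket/*"}],
}
proposed_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"],
                   "Resource": "arn:aws-us-gov:s3:::example-bucket/*"}],
}

# Returns FAIL if the proposed policy is more permissive than the existing one.
result = analyzer.check_no_new_access(
    existingPolicyDocument=json.dumps(existing_policy),
    newPolicyDocument=json.dumps(proposed_policy),
    policyType="IDENTITY_POLICY",
)
print(result["result"], result.get("message", ""))
```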
Amazon MQ now supports Graviton3-based M7g instances for RabbitMQ
Amazon MQ now supports Graviton3-based M7g instances for RabbitMQ in all available Regions, across both single-instance and highly available Multi-AZ cluster deployment modes. Amazon MQ for RabbitMQ cluster brokers running on M7g instances deliver up to 50% higher workload capacity and up to 85% throughput improvements over comparable Amazon MQ for RabbitMQ cluster brokers running on M5 instances.

Amazon MQ M7g instances are powered by Arm-based AWS Graviton3 processors and are available in a wide range of sizes, from M7g.medium, recommended for evaluation workloads, to M7g.large through M7g.16xlarge, recommended for production workloads. Amazon MQ M7g clusters are provisioned with optimized Amazon EBS disk volumes that vary with the instance size and reduce data storage costs for customers currently running RabbitMQ workloads on Amazon MQ M5 clusters. Amazon MQ M7g single-instance brokers are provisioned with Amazon EBS disk volumes of 200 GB. Refer to the Amazon MQ supported instance types to understand the sizes available. You can upgrade your existing RabbitMQ broker from M5 to M7g in place. M7g instances on Amazon MQ are available today across all generally available Regions except Africa (Cape Town), Canada West (Calgary), and Europe (Milan). For information on pricing and regional availability of instance sizes, refer to the Amazon MQ pricing page. To get started, create a new RabbitMQ broker with M7g instances or upgrade your existing broker now.
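An in-place upgrade is a broker update followed by a reboot (or the next maintenance window). The following boto3 sketch shows the idea; the broker ID is a placeholder, and the mq.m7g.large string is assumed to follow the existing mq.m5.large naming convention for host instance types.

```python
import boto3

mq = boto3.client("mq", region_name="us-east-1")

broker_id = "b-0123456789abcdef0"  # placeholder broker ID

# Request an in-place instance type change from M5 to M7g. The instance type
# string is an assumption based on the existing "mq.m5.*" naming convention.
mq.update_broker(
    BrokerId=broker_id,
    HostInstanceType="mq.m7g.large",
)

# Pending changes take effect when the broker is rebooted or during its next
# scheduled maintenance window.
mq.reboot_broker(BrokerId=broker_id)
```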
Amazon EC2 C6in instances are now available in Canada West (Calgary)
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in the AWS Canada West (Calgary) Region. These sixth-generation network-optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps network bandwidth, 2x the network bandwidth of comparable fifth-generation instances.

Customers can use C6in instances to scale the performance of applications such as network virtual appliances (firewalls, virtual routers, load balancers), Telco 5G User Plane Function (UPF), data analytics, high performance computing (HPC), and CPU-based AI/ML workloads. C6in instances are available in 10 different sizes with up to 128 vCPUs, including a bare metal size. Amazon EC2 sixth-generation x86-based network-optimized instances deliver up to 100 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth and up to 400K IOPS. C6in instances offer Elastic Fabric Adapter (EFA) networking support on the 32xlarge and metal sizes. C6in instances are available in these AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), Israel (Tel Aviv), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Africa (Cape Town), South America (Sao Paulo), Canada (Central), Canada West (Calgary), and AWS GovCloud (US-West, US-East). To learn more, see the Amazon EC2 C6in instances page. To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
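For example, launching a C6in instance in the new Region with the AWS SDK only requires targeting ca-west-1. The following boto3 sketch uses placeholder AMI, subnet, and key pair values.

```python
import boto3

# Canada West (Calgary) Region
ec2 = boto3.client("ec2", region_name="ca-west-1")

# Launch a c6in.large instance; the AMI ID, subnet, and key pair below are
# placeholders for your own values.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c6in.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    KeyName="my-key-pair",
)
print(response["Instances"][0]["InstanceId"])
```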
Amazon EMR Serverless adds support for Inline Runtime Permissions for job runs
Amazon EMR Serverless makes it simple to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce support for specifying permissions inline when submitting a job run. This allows you to define fine-grained, tenant-specific permission scopes per job run for multi-tenant use cases.

When submitting a job run on EMR Serverless, you can specify a runtime role that the job run can assume when calling other AWS services. In multi-tenant environments, such as those managed by SaaS providers, job runs are often submitted on behalf of specific tenants. To ensure security and least privilege, it is necessary to scope down the permissions of the runtime role to the specific context of a tenant for a given job run. Achieving this previously required creating a separate role for each tenant with restricted permissions. The proliferation of such roles can push the account limits of IAM and become unwieldy to manage. Now you can specify an inline permission policy when submitting a job run in addition to the runtime role. The effective permissions for a job run are the intersection of the inline policy and the runtime role. You can define the fine-grained, tenant-specific permissions for a job run in the inline policy, removing the need to manage a growing number of roles in multi-tenant environments, and easily adjust the policy definition for tenant-specific workloads. This feature is available for all supported EMR releases and in all AWS Regions where EMR Serverless is available. To learn more, visit Runtime Policy.
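As a rough sketch of the multi-tenant pattern, the job run below pairs a shared runtime role with a tenant-scoped inline policy so the job can only reach that tenant's S3 prefix. The application ID, role ARN, and script location are placeholders, and the executionIamPolicy parameter name is an assumption about how the inline policy is passed to StartJobRun; consult the Runtime Policy documentation for the exact request shape.

```python
import json
import boto3

emr = boto3.client("emr-serverless", region_name="us-east-1")

# Tenant-scoped inline policy: only this tenant's S3 prefix is reachable,
# regardless of the broader permissions on the shared runtime role.
tenant_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-data-lake/tenant-42/*",
    }],
}

# Minimal sketch: the application ID, role ARN, and script location are
# placeholders, and "executionIamPolicy" is an assumed parameter name for
# attaching the inline policy to the job run.
response = emr.start_job_run(
    applicationId="00fabcdef0example",
    executionRoleArn="arn:aws:iam::111122223333:role/shared-emr-runtime-role",
    executionIamPolicy={"policy": json.dumps(tenant_policy)},
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://example-data-lake/scripts/etl_job.py",
        }
    },
    name="tenant-42-etl",
)
print(response["jobRunId"])
```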
AWS Blogs
AWS Japan Blog (Japanese)
- Simplify serverless development with console and IDE integration and remote debugging with AWS Lambda
- AWS AI League: Learn, Innovate, and Compete in the New Ultimate AI Showdown
- Accelerate secure software releases using Amazon ECS’s new embedded blue/green deployment
- Challenge the next generation of food management at AWS Summit Japan 2025! Smart waste management enabled by AI and IoT
- Amazon SageMaker Announces Customization of Amazon Nova with AI
- Introducing AWS Serverless MCP Server: Using AI to Develop Modern Applications
- Improve productivity with Amazon Bedrock Agents and Powertools for AWS Lambda
- Use the new Amazon EventBridge logging to monitor and debug event-driven applications
- Using Export for vCenter with AWS Transform
- Introducing Amazon Bedrock AgentCore: Securely Deploy and Operate AI Agents at Any Scale (Preview)
AWS Japan Startup Blog (Japanese)
AWS Big Data Blog
- Improve RabbitMQ performance on Amazon MQ with AWS Graviton3-based M7g instances
- Accelerating development with the AWS Data Processing MCP Server and Agent
- Workload management in OpenSearch-based multi-tenant centralized logging platforms
AWS Contact Center
Containers
Desktop and Application Streaming
AWS for Industries
Artificial Intelligence
- Beyond accelerators: Lessons from building foundation models on AWS with Japan’s GENIAC program
- Streamline deep learning environments with Amazon Q Developer and MCP
AWS Security Blog
- Introducing SRA Verify – an AWS Security Reference Architecture assessment tool
- Five facts about how the CLOUD Act actually works