8/18/2025, 12:00:00 AM ~ 8/19/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon RDS io2 Block Express now available in the AWS GovCloud (US) Regions
Amazon RDS io2 Block Express volumes are now available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. Amazon RDS io2 Block Express volumes provide consistent sub-millisecond latency for mission-critical workloads.

Amazon RDS io2 Block Express volumes are designed for critical database workloads that demand high performance, high throughput, and consistently low latency. io2 Block Express storage has the lowest p99.9 I/O latency and the best outlier latency control among major cloud providers, making it ideal for the most I/O-intensive, mission-critical database workloads. io2 Block Express supports 99.999% durability, volumes up to 64 TiB, 4,000 MB/s throughput, and up to 256,000 Provisioned IOPS for your most demanding database needs, at the same price as Amazon RDS io1 volumes. You can upgrade from an Amazon RDS io1 volume to an Amazon RDS io2 Block Express volume without any downtime using the ModifyDBInstance API. To learn more about Amazon RDS storage, visit the Amazon RDS User Guide. Create or update a fully managed Amazon RDS database with an io2 Block Express volume, or modify an existing io1, gp2, or gp3 volume type without disruption, in the Amazon RDS Management Console.
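As a minimal sketch of that no-downtime upgrade, the following boto3 call switches an existing instance's storage to io2 via the ModifyDBInstance API. The instance identifier and IOPS figure are placeholders; adjust them to your workload.

```python
import boto3

rds = boto3.client("rds", region_name="us-gov-west-1")

# Changing StorageType to "io2" starts a storage modification that does not
# require downtime; ApplyImmediately avoids waiting for the next maintenance window.
response = rds.modify_db_instance(
    DBInstanceIdentifier="my-critical-db",  # placeholder
    StorageType="io2",
    Iops=64000,                             # io2 supports up to 256,000 provisioned IOPS
    ApplyImmediately=True,
)
print(response["DBInstance"]["PendingModifiedValues"])
```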
Amazon Connect communication widget now offers a contact form option
You can now easily deliver task- and email-based customer experiences on your websites and applications using the new contact form option in the Amazon Connect communication widget. For example, you can add the communication widget to your website and give customers the ability to submit callback requests outside business hours or send emails through webforms.

Supervisors and managers can configure customer-facing forms using the drag-and-drop editor and generate code snippets for seamless website integration. This expanded capability gives customers more flexible engagement options while enabling you to manage all engagements through existing Amazon Connect workflows. For Region availability, please see the availability of Amazon Connect features by Region. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.
Amazon QuickSight expands limits on calculated fields
Amazon QuickSight has increased the limit on the number of calculated fields allowed in an analysis from 500 to 2,000, and the per-dataset limit from 200 to 500. This update enables authors and data curators to create more transformations on their data and draw additional complex insights. It is especially useful for authors and data curators who work with very large datasets and cater to multiple end-user personas.

In Regions where Amazon Q in QuickSight is available, users can also use natural language to build calculations using Q. The new calculated field limits are now available in all supported Amazon QuickSight Regions. To learn more about calculated fields and other QuickSight limits, visit item limits for analysis.
Amazon Connect now supports recurring activities in agent schedules
Amazon Connect now supports recurring activities in agent schedules, making it easier for you to add repeating events in a few clicks. With this launch, you can schedule activities such as a daily stand-up at 8 a.m. or a team meeting every Monday at 9 a.m. as a series that is automatically added to agent schedules. You can schedule these as an individual recurring series for each agent or a shared recurring series across multiple agents. This launch eliminates the need to manually create each occurrence as a separate activity and ensures activities are added to agent schedules on time, improving manager productivity and keeping agent schedules up to date.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
AWS Direct Connect announces new location in Barcelona, Spain
Today, AWS announced the opening of a new AWS Direct Connect location within the Equinix BA1 data center near Barcelona, Spain. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This site is the first AWS Direct Connect location in Barcelona and the third AWS Direct Connect location within Spain. This Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections, with MACsec encryption available.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than connections made over the public internet. For more information on the more than 143 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Announcing Amazon Aurora MySQL 3.10 as long-term support (LTS) release
Starting today, long-term support (LTS) is also provided for the Aurora MySQL 3.10 minor version (compatible with MySQL 8.0.42). Database clusters that use LTS releases can stay on the same minor version for at least three years, or until the end of standard support for the major version, whichever is sooner. During the lifetime of an Aurora MySQL LTS release, new patches introduce fixes for select high-severity security and operational issues. These patches don’t include any new features. For more details about Aurora MySQL 3.10, refer to the Aurora MySQL 3.10 launch announcement and release notes.

This is the second minor version designated as LTS on Aurora MySQL-Compatible Edition 3, in addition to the Aurora MySQL 3.04 minor version (compatible with MySQL 8.0.26). This LTS version does not change the end-of-life schedules for other LTS versions or engine major versions. For more details about LTS and how to stay on an LTS minor version, refer to the LTS documentation. This LTS release is available in all AWS Regions where Aurora MySQL is available. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
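To stay on the LTS minor version, you can pin the engine version at cluster creation and disable automatic minor version upgrades on the cluster's instances. A minimal boto3 sketch follows; the identifiers are placeholders, and the 3.10 engine version string is an assumption to verify with describe-db-engine-versions.

```python
import boto3

rds = boto3.client("rds")

# Pin the cluster to the Aurora MySQL 3.10 LTS minor version.
rds.create_db_cluster(
    DBClusterIdentifier="lts-cluster",         # placeholder
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.10.0",   # assumed 3.10 version string; verify first
    MasterUsername="admin",
    ManageMasterUserPassword=True,             # let RDS manage the credential in Secrets Manager
)

# Disable automatic minor version upgrades so instances stay on the LTS minor.
rds.create_db_instance(
    DBInstanceIdentifier="lts-cluster-instance-1",
    DBClusterIdentifier="lts-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",            # placeholder
    AutoMinorVersionUpgrade=False,
)
```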
New streamlined fulfillment experience for AMI-based products in AWS Marketplace
AWS Marketplace now offers a streamlined fulfillment experience for Amazon Machine Image (AMI) and AMI with CloudFormation products across both the AWS Marketplace website and console. Users can also access the new launch experience for container products directly within the AWS Marketplace console.

The new fulfillment experience in AWS Marketplace simplifies product deployment for AMI products by combining the configuration and fulfillment experience into a single page, clearly presenting fulfillment options and the AWS services available for purchased products. Users will also have access to new resources, including detailed guides from AWS Marketplace sellers. The new experience is available through both the AWS Marketplace website and console, in all AWS Marketplace supported commercial Regions and languages, delivering a consistent experience worldwide. To learn more about the new fulfillment experience for AMI-based products in AWS Marketplace and how it can benefit your organization, visit the AWS Marketplace Buyer Guide or start exploring AMI products in AWS Marketplace today.
Amazon S3 introduces a new way to verify the content of stored datasets
Amazon S3 provides a new way to verify the content of stored datasets. Using S3 Batch Operations, you can efficiently verify billions of objects and automatically generate integrity reports to prove that your datasets remain intact over time. This capability works with any object stored in S3, regardless of storage class or object size, without the need to restore or download data. Whether you’re verifying objects for data preservation, accuracy checks, or compliance requirements, you can reduce the cost, time, and effort required.

With S3 Batch Operations, you can create a compute checksum job for your objects. To get started, provide a list of objects (called a manifest) or specify the bucket with filters like prefix or suffix. Then choose “Compute checksum” as the operation type and select from supported algorithms including SHA-1, SHA-256, CRC32, CRC32C, CRC64, and MD5. When the job completes, you receive a detailed report with checksum information for all processed objects. You can use this report for compliance or audit purposes. This capability complements S3’s built-in validation, letting you independently verify your stored data at any time. This new data verification capability, the compute checksum operation, is now available in all AWS Regions. For pricing details, visit the S3 pricing page. To learn more, visit the S3 User Guide.
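A hedged sketch of creating such a job with boto3's s3control create_job follows. The manifest, report, and IAM role fields use the established S3 Batch Operations request shape, but the checksum operation key (S3ComputeObjectChecksum) and its ChecksumAlgorithm field are assumptions based on this announcement; check the S3 User Guide for the exact names. All ARNs and bucket names are placeholders.

```python
import boto3

s3control = boto3.client("s3control")

s3control.create_job(
    AccountId="111122223333",
    Priority=10,
    RoleArn="arn:aws:iam::111122223333:role/batch-ops-role",  # placeholder
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-manifests/objects.csv",  # placeholder
            "ETag": "example-manifest-etag",                       # placeholder
        },
    },
    Operation={
        "S3ComputeObjectChecksum": {        # hypothetical operation key
            "ChecksumAlgorithm": "SHA256",  # one of the supported algorithms
        }
    },
    # The completion report doubles as the integrity report for audits.
    Report={
        "Bucket": "arn:aws:s3:::my-reports",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "checksum-job",
        "ReportScope": "AllTasks",
    },
    ConfirmationRequired=False,
)
```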
AWS Batch now supports default instance type options
As of today, AWS Batch supports two new options for the allowed instance types in managed compute environments: default-x86_64 (the default) and default-arm64. These options automatically select the most cost-effective instance type across different generations based on your job queue requirements, whereas AWS Batch previously supported only the ‘optimal’ instance type option. This makes it easier to run your Batch workloads on newer-generation EC2 instance families and can provide better performance at a lower cost. As new instance types become available in a Region, they’ll be automatically added to the corresponding default pool.

To get started, you can select default-x86_64 or default-arm64 in the instanceType parameter for managed compute environments, as shown in the sketch below. There is no need to create a new compute environment. The existing ‘optimal’ option (which applies to the M, C, and R EC2 instance families) will continue to be supported and is not being deprecated, so no action is needed. However, please be aware that only ENABLED and VALID compute environments (CEs) will be automatically updated with new instance types. If you have any DISABLED or INVALID CEs, they will receive updates once they are re-enabled and return to a VALID state. This capability is now available for AWS Batch in all commercial and AWS GovCloud (US) Regions. To learn more, see our launch blog, visit Batch’s documentation page, or see the Batch troubleshooting User Guide.
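Here is a minimal boto3 sketch of a managed compute environment that opts into the new option; in boto3 this is the instanceTypes list inside computeResources. Subnets, security groups, and roles are placeholders.

```python
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="arm64-ce",  # placeholder
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "allocationStrategy": "BEST_FIT_PROGRESSIVE",
        "minvCpus": 0,
        "maxvCpus": 256,
        # "default-arm64" lets Batch pick the most cost-effective
        # current-generation Graviton instance types for the queue's jobs.
        "instanceTypes": ["default-arm64"],
        "subnets": ["subnet-0123456789abcdef0"],       # placeholder
        "securityGroupIds": ["sg-0123456789abcdef0"],  # placeholder
        "instanceRole": "ecsInstanceRole",             # placeholder
    },
    serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",  # placeholder
)
```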
Amazon Bedrock now supports Batch inference for Anthropic Claude Sonnet 4 and OpenAI GPT-OSS models
Anthropic’s Claude Sonnet 4 and OpenAI’s GPT-OSS 120B and 20B models are now available for batch inference in Amazon Bedrock. With batch inference, you can run multiple inference requests asynchronously, improving performance on large datasets at 50% of the on-demand inference pricing. Amazon Bedrock offers select foundation models (FMs) from leading AI providers such as Anthropic, OpenAI, Meta, and Amazon for batch inference, making it easier and more cost-effective to process high-volume workloads.

With batch inference on Claude Sonnet 4 and the OpenAI GPT-OSS models, you can process large datasets for scenarios such as document and customer feedback analysis, bulk content generation (e.g., marketing copy, product descriptions), large-scale prompt or output evaluations, automated summarization of knowledge bases and archives, mass categorization of support tickets or emails, and extraction of structured data from unstructured text, at scale and with lower costs. We’ve optimized our batch offering to deliver higher overall batch throughput on these newer models compared to previous ones. In addition, you can now track your batch workload progress at the AWS account level with Amazon CloudWatch metrics. For all models, these metrics include total pending records, processed records, and tokens per minute; for Claude models, they also include tokens pending processing. To learn more about batch inference in Amazon Bedrock, visit the batch inference documentation. Visit the Supported Regions and models for batch inference page for more details on supported models, and follow the Amazon Bedrock API reference to get started with batch inference.
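A batch job is submitted with the CreateModelInvocationJob API, pointing at a JSONL file of inference requests in S3. A minimal boto3 sketch follows; the Claude Sonnet 4 model ID is an assumption to verify on the supported models page, and the S3 URIs and role ARN are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

job = bedrock.create_model_invocation_job(
    jobName="feedback-analysis-batch",                            # placeholder
    roleArn="arn:aws:iam::111122223333:role/bedrock-batch-role",  # placeholder
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",            # assumed model ID
    inputDataConfig={
        "s3InputDataConfig": {
            # JSONL file where each line holds a recordId and a modelInput body
            "s3Uri": "s3://my-bedrock-input/requests.jsonl",
        }
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bedrock-output/"}
    },
)
print(job["jobArn"])  # poll GetModelInvocationJob or watch CloudWatch metrics
```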
Amazon S3 Express One Zone now supports resilience testing with AWS Fault Injection Service
Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports resilience testing with AWS Fault Injection Service (FIS). With this launch, you can use the FIS network disruption action to test the failover response and recovery of your latency-sensitive applications in the unlikely event of a disruption to an Availability Zone (AZ) that impairs access to your data. You can use the results of FIS experiments to verify your monitoring, test recovery processes, and improve application resilience.

With FIS, you can disrupt connectivity to your S3 Express One Zone data in S3 directory buckets, helping you validate application resilience. During the FIS experiment, data plane requests made to directory buckets will time out. This fault action is also included in the FIS AZ Availability: Power Interruption scenario, so you can test the resilience of your applications when an AZ event impacts multiple AWS services. You can use the updated FIS network disruption action for S3 Express One Zone data in all AWS Regions where the storage class is available. To get started with testing the resilience of applications that store data in S3 Express One Zone, you can use the AWS Management Console, AWS CLI, or FIS API. For pricing information, visit the FIS pricing page. To learn more, visit the AWS FIS user guide.
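As a sketch, the FIS experiment template below applies the existing aws:network:disrupt-connectivity action to an application subnet. The "s3express" scope value is an assumption based on this announcement (scopes such as "s3" already exist for the action); the role ARN, subnet ARN, and duration are placeholders.

```python
import boto3

fis = boto3.client("fis")

fis.create_experiment_template(
    description="Disrupt access to S3 Express One Zone directory buckets",
    roleArn="arn:aws:iam::111122223333:role/fis-role",  # placeholder
    stopConditions=[{"source": "none"}],                # add a CloudWatch alarm in practice
    targets={
        "app-subnets": {
            "resourceType": "aws:ec2:subnet",
            "resourceArns": [
                "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"
            ],
            "selectionMode": "ALL",
        }
    },
    actions={
        "disrupt-s3express": {
            "actionId": "aws:network:disrupt-connectivity",
            # scope value assumed; verify in the FIS actions reference
            "parameters": {"duration": "PT5M", "scope": "s3express"},
            "targets": {"Subnets": "app-subnets"},
        }
    },
)
```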
AWS Blogs
AWS Japan Blog (Japanese)
- Kiro’s pricing plans have been revealed
- Unlock development productivity with Kiro and Model Context Protocol (MCP)
- Weekly Generative AI with AWS — Week of 2025/8/11
- How Amazon Finance Automation built a business data store on AWS purpose-built databases to run a critical financial application
- [Event Report] AWS Summit Japan 2025 Construction and Real Estate Booth Exhibit
- Streamlining inter-service communication in blue/green deployment processes using Amazon ECS Service Connect
- AWS Weekly — 2025/8/11
AWS Big Data Blog
- Guide to adopting Amazon SageMaker Unified Studio from ATPCO’s Journey
- Achieve low-latency data processing with Amazon EMR on AWS Local Zones
AWS Database Blog
- Beyond Correlation: Finding Root-Causes using a network digital twin graph and agentic AI
- Demystifying the AWS advanced JDBC wrapper plugins
Networking & Content Delivery
- Enhancing Pinterest’s organizational security with a DNS firewall: Part 2
- Enhancing Pinterest’s organizational security with a DNS firewall: Part 1
- Using CloudWatch Alarms and Lambda to catch exceptional traffic
- Securing hybrid workloads using Amazon Route 53 Resolver DNS Firewall