7/16/2025, 12:00:00 AM ~ 7/17/2025, 12:00:00 AM (UTC)
Recent Announcements
Introducing AI agents and tools in AWS Marketplace
AWS Marketplace now offers AI agents and tools from AWS Partners, allowing customers to find and buy third-party AI agent solutions with streamlined procurement and multiple deployment options. Customers can accelerate their discovery of AI agents and agent tools in a centralized catalog while enjoying the benefits of purchasing through AWS Marketplace, and Partners can quickly bring their AI agent solutions to market.
Customers can explore AI agent products on the new “AI Agents & Tools” solution page. Using natural language, customers can search and receive results that match their specific use cases. When evaluating solutions, customers can review listings that support the Model Context Protocol (MCP) and Agent2Agent (A2A) standard protocols, along with various deployment options, to determine the best-fit solution for their needs. Customers can then purchase and deploy their chosen solutions through various paths, including Amazon Bedrock AgentCore Runtime, or add tools to AgentCore Gateway to accelerate agent development.
For AWS Partners, AI Agents and Tools in AWS Marketplace accelerates customer reach and adoption of agentic solutions. By listing their AI agents and tools, Partners can leverage established AWS Marketplace channels to streamline sales, offer flexible pricing, and provide secure AWS deployment options. Partners can categorize their offerings and highlight MCP and A2A protocol support, enhancing discoverability through advanced search and filtering in the AWS Marketplace catalog. Integration with Amazon Bedrock AgentCore services further simplifies deployment for customers, reducing time to value and providing a secure, scalable environment for building innovative agentic solutions. Start exploring AI agent solutions in AWS Marketplace. Learn how AWS Partners can start selling by accessing the AWS Marketplace Seller Guide.
AWS API MCP Server now available
Today, AWS announces the developer preview of the AWS API Model Context Protocol (MCP) Server, a new tool that enables foundation models (FMs) to interact with any AWS API through natural language by creating and executing syntactically valid CLI commands.
With the AWS API MCP Server, customers using popular MCP clients can streamline tasks like troubleshooting workloads, managing application deployments, and exploring AWS services and capabilities by issuing natural language requests that the host FM translates into API calls. The AWS API MCP Server allows MCP clients to discover supported AWS APIs and call them through the host FM, enabling actions such as inspecting, creating, and modifying AWS resources. The server provides secure access control through AWS Identity and Access Management (IAM) credentials and pre-configured API permissions, ensuring that FMs can only access or act on permitted AWS APIs. The AWS API MCP Server is released as an open source project and is available now. Visit the AWS Labs GitHub repository to download, deploy, and start experimenting with natural language interaction with AWS APIs today.
AWS Transform for mainframe introduces enhanced code refactoring and business logic capabilities
AWS Transform for mainframe now offers enhanced reforge and business logic extraction functionality to further streamline mainframe modernization. These new capabilities help organizations reduce modernization time, improve code quality and maintainability, and optimize modernization and migration costs.
The reforge capability in AWS Transform for mainframe is now generally available, enhancing transformed Java code by restructuring complex methods, adding descriptive comments, optimizing variable usage, and improving code flow. This results in more readable and maintainable code for developers. Additionally, AWS Transform for mainframe’s business logic extraction capability now provides application-level insights, from high-level summaries to detailed business function analysis, complementing the existing file-level business logic extraction, to help users better understand their legacy applications. These capabilities are now available in all AWS Regions where AWS Transform is offered. To learn more, visit the AWS Transform for mainframe product page, read the user guide, or get started in the AWS Transform web experience.
Customize Amazon Nova in Amazon SageMaker AI
Today, Amazon Nova is introducing the most comprehensive suite of model customization capabilities made available for any proprietary model family. Available as ready-to-use recipes on Amazon SageMaker AI, these capabilities allow customers to adapt Nova Micro, Nova Lite, and Nova Pro across the model training lifecycle, including pre-training, supervised fine-tuning, and alignment.
Using these customization techniques, you can adapt Nova models to accurately reflect your proprietary knowledge, workflows, and brand in your generative AI applications while maintaining Nova’s industry-leading price performance and low latency. The techniques include Continued Pre-Training, Supervised Fine-Tuning (SFT), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), and Knowledge Distillation, with support for both parameter-efficient and full-model training options across SFT, DPO, and Distillation. Nova customization recipes are available in SageMaker training jobs and SageMaker HyperPod, giving you flexibility to select the environment that best fits your infrastructure and scale requirements. You can deploy your customized models on Amazon Bedrock and invoke them via on-demand inference or Provisioned Throughput; on-demand inference is available only for models customized with parameter-efficient training techniques. Recipes for Amazon Nova on Amazon SageMaker AI are available in US East (N. Virginia). To get started, read the Amazon Nova user guide and visit the GitHub repository to browse Nova-specific SageMaker training recipes.
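For orientation, here is a hedged sketch of launching a recipe-based Nova fine-tuning job with the SageMaker Python SDK. The recipe path, instance type, role, and S3 URIs are placeholders, and the `training_recipe`/`recipe_overrides` parameters follow the SDK's recipe integration as we understand it; check the Nova user guide and the recipe GitHub repository for the authoritative values.

```python
# Hedged sketch: launching a Nova customization recipe as a SageMaker training
# job with the SageMaker Python SDK. Recipe path, role, instance type, and S3
# URIs are placeholders -- verify against the Nova user guide and recipe repo.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    base_job_name="nova-micro-sft",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_count=1,
    instance_type="ml.p5.48xlarge",                      # placeholder instance
    training_recipe="fine-tuning/nova/nova_micro_sft",   # placeholder recipe path
    recipe_overrides={"training_config": {"max_epochs": 2}},  # assumed override keys
)

# Point the job at a prepared dataset in S3 and start training.
estimator.fit({"train": "s3://amzn-s3-demo-bucket/nova-sft-train/"})
```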
Amazon Nova Sonic adds language support for French, Italian, German
Amazon Nova Sonic, a speech-to-speech foundation model, now supports French, Italian, and German, expanding on its existing coverage of English and Spanish. This update includes six additional expressive voices, offering both masculine- and feminine-sounding options, to help developers create more natural and inclusive conversational AI experiences across a wider range of languages.
In addition, Amazon Nova Sonic now integrates with LiveKit, an open-source WebRTC platform, and Pipecat, an open-source framework for building voice and multimodal AI agents. These integrations simplify the development of low-latency, real-time voice applications by removing the need to manage complex audio pipelines and streaming infrastructure. Nova Sonic also now supports integrations with Vonage and Twilio, extending deployment flexibility for telephony and communications use cases. Amazon Nova Sonic delivers real-time, human-like voice conversations with low latency. Available in Amazon Bedrock via the bidirectional streaming API, the model understands streaming speech in various speaking styles and generates expressive speech responses that dynamically adapt to the prosody of the input speech. Amazon Nova Sonic is now available globally on Amazon Bedrock in three AWS Regions. To learn more, read the AWS News Blog, the Amazon Nova Sonic product page, and the User Guide. To get started, visit the Amazon Bedrock console.
AWS Deadline Cloud now supports Unreal Engine in Service-Managed Fleets
AWS Deadline Cloud has expanded its support for Unreal Engine in its Service-Managed Fleets. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects using industry-standard tools for gaming, film, television, web content, and more.
With this new feature, you can submit Unreal Engine 5.4, 5.5, or 5.6 projects to Deadline Cloud for rendering without needing to configure or manage compute infrastructure. Once the Deadline Cloud submitter is installed, you can easily submit jobs directly from Unreal Engine’s Movie Render Queue. AWS Deadline Cloud automatically handles the provisioning and elastic scaling of the compute resources required for rendering your projects. Deadline Cloud Unreal Engine support is available in all AWS Regions where Deadline Cloud is offered. For more information, please visit the Deadline Cloud product page and our Deadline Cloud for Unreal Engine GitHub repository.
AWS Knowledge MCP Server now available (Preview)
Today, AWS announces the preview release of the AWS Knowledge Model Context Protocol (MCP) Server, a new tool that surfaces authoritative AWS knowledge in an LLM-compatible format, including documentation, blog posts, What’s New announcements, and Well-Architected best practices.
The AWS Knowledge MCP Server enables clients and foundation models (FMs) that support MCP to ground their responses in trusted AWS context, guidance, and best practices, providing the grounding needed for accurate reasoning and consistent execution while reducing manual context management. Customers can now focus on business problems instead of searching for information manually.
The server is publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits. Give your developers and agents access to the most up-to-date AWS information today by configuring your MCP clients to use the AWS Knowledge MCP Server endpoint, and follow the Getting Started guide for setup instructions.
AWS DataSync now supports IPv6
AWS DataSync announces Internet Protocol version 6 (IPv6) support for storage resources. With this launch, customers can now use DataSync to connect to storage resources located on premises or in other clouds using either IPv4 or IPv6 addresses.
AWS DataSync is a secure, high-speed file transfer service that simplifies moving data over networks. Customers can now use DataSync to transfer data to and from NFS, SMB, and object storage servers configured with IPv6 addresses. With dual-stack (IPv4 and IPv6) support, customers can continue to use DataSync in their environments as they transition their networks from IPv4 to IPv6. IPv6 support is available in all AWS Regions where AWS DataSync is available. To learn more about configuring IPv6 connectivity with AWS DataSync, visit the AWS DataSync User Guide.
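As a quick illustration, here is a minimal boto3 sketch that registers an on-premises NFS server as a DataSync location using an IPv6 address; the agent ARN, server address, and export path are placeholders.

```python
# Minimal boto3 sketch: registering an on-premises NFS server as a DataSync
# location using an IPv6 address. Agent ARN, address, and path are placeholders.
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

location = datasync.create_location_nfs(
    ServerHostname="2001:db8::10",   # IPv6 address of the NFS server
    Subdirectory="/exports/data",    # export path to read from or write to
    OnPremConfig={
        "AgentArns": [
            "arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"
        ]
    },
)
print(location["LocationArn"])  # use this ARN as a task source or destination
```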
Amazon Bedrock Data Automation is now available in 5 additional AWS Regions
Amazon Bedrock Data Automation (BDA) is now generally available in Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Sydney).
BDA is a feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build generative AI-based applications. By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA can be used as a standalone feature or as a parser in Amazon Bedrock Knowledge Bases RAG workflows. With this launch, BDA is now available in a total of 7 AWS Regions, including the US West (Oregon) and US East (N. Virginia) Regions. To learn more, visit the Bedrock Data Automation product page and the Amazon Bedrock Pricing page.
Amazon SageMaker streamlines S3 Tables workflow experience
Amazon SageMaker has simplified the process of creating, querying, and joining Amazon S3 Tables with data in Amazon S3 general purpose buckets, Amazon Redshift data warehouses, and third-party data sources by allowing customers to create S3 table buckets and the related catalogs without having to navigate between multiple AWS consoles.
Users can now create tables, load data, and run queries using the Query Editor or a Jupyter notebook within SageMaker Unified Studio. For administrators, the update includes the ability to enable analytics integration with S3 for their AWS account and create custom profiles. Project owners can use these profiles to set up projects with pre-configured catalogs and S3 Tables support, reducing the manual configuration steps required to get started. This updated S3 Tables experience in SageMaker Unified Studio is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Europe (Paris), Europe (Stockholm), Europe (London), and Europe (Frankfurt). To get started with the updated S3 Tables workflow in SageMaker Unified Studio, see the Amazon SageMaker documentation.
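For context, the boto3 sketch below shows a minimal version of the underlying S3 Tables API calls that the console experience automates: creating a table bucket, then a namespace to hold tables. Names are placeholders.

```python
# Minimal boto3 sketch of the S3 Tables calls the console experience automates.
# Bucket and namespace names are placeholders.
import boto3

s3tables = boto3.client("s3tables", region_name="us-east-1")

bucket = s3tables.create_table_bucket(name="analytics-table-bucket")
print(bucket["arn"])

# Namespaces group tables within a table bucket, much like database schemas.
s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)
```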
AWS Glue now supports zero-ETL integrations from Amazon DynamoDB and eight applications to S3 Tables
AWS Glue now supports zero-ETL integration (managed ingestion) from Amazon DynamoDB and eight applications to Amazon S3 Tables, automating the extraction and loading of data into S3 Tables from DynamoDB and applications like Salesforce, SAP, ServiceNow, and Zendesk.
S3 Tables are purpose-built for storing tabular data at scale, with built-in Apache Iceberg support. You can enable S3 Tables to work with AWS Lake Formation to support various analytics services, including Amazon Athena, Amazon EMR, Amazon Redshift, and AWS Glue. Zero-ETL integrations are fully managed by AWS and minimize the need to build and manage ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load data from DynamoDB tables or from your customer support, relationship management, and ERP applications into your S3 Tables-backed data lake for analysis. Zero-ETL integration reduces users’ operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. Zero-ETL integration from DynamoDB and eight applications to S3 Tables is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), Europe (Ireland), South America (São Paulo), Asia Pacific (Seoul), Europe (London), and Canada (Central) AWS Regions. You can create and manage integrations using the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and the Glue zero-ETL documentation.
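The sketch below illustrates what creating such an integration might look like with boto3's Glue client. CreateIntegration is Glue's managed-ingestion API; the source and target ARNs are placeholders, and the exact target specification for an S3 Tables catalog should be verified against the Glue zero-ETL documentation.

```python
# Hedged boto3 sketch: creating a Glue zero-ETL integration from a DynamoDB
# table. ARNs are placeholders; verify the S3 Tables target format in the docs.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

integration = glue.create_integration(
    IntegrationName="dynamodb-orders-to-s3-tables",
    SourceArn="arn:aws:dynamodb:us-east-1:111122223333:table/Orders",     # placeholder
    TargetArn="arn:aws:glue:us-east-1:111122223333:database/orders_lake", # placeholder
    Description="Managed ingestion from DynamoDB into an S3 Tables data lake",
)
print(integration["IntegrationArn"])
```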
Amazon EBS now provides visibility into EBS volume initialization status
Amazon EBS now provides visibility into the volume initialization status for volumes created from EBS snapshots. You can use this status to determine when a volume restored from a snapshot is fully initialized and ready to support latency-sensitive applications.
EBS volumes created from EBS snapshots undergo volume initialization, in which the storage blocks from the snapshot must be downloaded from Amazon S3 and written to the volume before you can access them. The volume initialization rate fluctuates throughout the initialization process, which can make completion times unpredictable, and during initialization you may notice increased I/O latency and reduced performance. The new volume initialization status lets you validate when all blocks have been downloaded and written to the volume, enabling fully provisioned performance on your volume. Using this status, you can monitor initialization progress in real time and time your application launches to align with initialization completion, ensuring optimal performance from the start. When creating volumes with a Provisioned Rate for Volume Initialization, you will also see the estimated completion time for your volume initialization. Volume initialization status is accessible by default for all EBS volumes. It is available in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions. You can start using this feature today through the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more about the new volume initialization status and how to access it, please visit the EBS initialize volume documentation.
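A hedged boto3 sketch of polling the new status follows. DescribeVolumeStatus is a long-standing API, but the `InitializationStatusDetails` field name and its keys reflect this launch as we understand it and should be verified against the EBS documentation.

```python
# Hedged boto3 sketch: polling the initialization status of a volume restored
# from a snapshot. "InitializationStatusDetails" and its keys are assumptions.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
volume_id = "vol-0123456789abcdef0"  # placeholder: volume created from a snapshot

while True:
    resp = ec2.describe_volume_status(VolumeIds=[volume_id])
    details = resp["VolumeStatuses"][0].get("InitializationStatusDetails", {})
    print(details)  # assumed keys: initialization type, progress, estimated time
    if details.get("Progress") == 100:  # assumed: 100 means fully initialized
        break  # all snapshot blocks written; launch latency-sensitive apps now
    time.sleep(60)
```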
Image-to-video generation support for Luma AI’s Ray2 now in Amazon Bedrock
Today, we are excited to announce that Luma AI’s Ray2 model on Amazon Bedrock now supports image-to-video generation.
This new feature expands upon the text-to-video generation capabilities introduced in January, providing developers with even more powerful tools for creating dynamic video content. With this update, customers can transform static .jpeg and .png images of up to 25 MB into captivating videos using the state-of-the-art Ray2 model, opening up new possibilities for content creation and visual storytelling. The addition of image-to-video generation to Ray2 on Amazon Bedrock empowers developers and content creators to bring their static visuals to life. This feature is particularly valuable for industries such as advertising, entertainment, and e-commerce, where engaging video content can significantly enhance user experience and engagement. By leveraging the power of AI to generate videos from images, businesses can save time and resources while producing high-quality, dynamic content at scale. Luma AI’s Ray2 model is available in the US West (Oregon) AWS Region. To learn more about Ray2 and how to use it in your projects, read the AWS News Blog, visit the Luma AI in Amazon Bedrock page or the Amazon Bedrock console, or check out the Amazon Bedrock documentation.
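For a sense of the developer experience, here is a hedged sketch of an image-to-video request via Bedrock's asynchronous invocation API. The model ID and the `modelInput` shape (which follows Luma's keyframes convention) are assumptions to verify against the Bedrock documentation; the S3 URIs are placeholders.

```python
# Hedged sketch: image-to-video with Ray2 on Amazon Bedrock. Video models run
# through the async invoke API; model ID and input shape are assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

job = bedrock.start_async_invoke(
    modelId="luma.ray-v2:0",  # assumed model identifier
    modelInput={
        "prompt": "The product rotates slowly under soft studio lighting",
        "keyframes": {  # assumed input shape for image conditioning
            "frame0": {"type": "image", "source": "s3://amzn-s3-demo-bucket/product.png"}
        },
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/ray2-output/"}
    },
)
print(job["invocationArn"])  # poll get_async_invoke() with this ARN
```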
Announcing Model Context Protocol (MCP) Server for Amazon MSK
Amazon MSK announces a Model Context Protocol (MCP) server that allows customers to interact with their Amazon MSK clusters using a standardized natural language interface and agentic applications. Amazon MSK’s MCP server uses Anthropic’s open-source Model Context Protocol, which standardizes how AI agents interact with external systems such as databases, knowledge sources, and other microservices. The server provides AI agents with aggregate views of cluster metrics, configuration states, and operational context, delivering built-in understanding of cluster quotas, capacity limits, best practice guidelines, and contextual recommendations based on workload characteristics. This approach enables agents to make informed decisions about cluster modifications with full awareness of constraints and dependencies. Additionally, each interaction is governed by customer-defined security policies, which limit access to only those agents that have explicit permissions to the APIs required to achieve the desired objective.
To download and try out the open-source MCP server locally with your AI-enabled IDE of choice, visit the aws-labs GitHub repository.
Announcing AWS AI League
Today, AWS introduces the AWS AI League, a program that helps organizations upskill their workforce by combining fun competition with hands-on learning using AWS AI services such as Amazon SageMaker AI and Amazon Bedrock. The program offers a unique opportunity for both enterprises and developers to gain valuable, practical skills in fine-tuning, model customization, and prompt engineering. Enterprises can apply to receive AWS credits to host internal AWS AI League competitions, fostering a culture of innovation within their organizations. Individual developers can also participate in the AWS AI League at select AWS Summits and at AWS re:Invent, giving them a chance to compete while engaging with cutting-edge AI technologies and gaining skills crucial for developing advanced AI solutions.
AWS is committing up to $2 million in AWS credits and a $25,000 championship prize pool to reward top performers at AWS re:Invent 2025. This significant investment underscores AWS’s commitment to advancing AI skills across the workforce and accelerating innovation in generative AI. For more information about the AWS AI League and how to participate, please visit the AWS AI League page.
Announcing on-demand deployment for custom Amazon Nova models in Amazon Bedrock
Starting today, customers can use the on-demand deployment option in Amazon Bedrock for Nova models that have been fine-tuned or distilled in Bedrock, or customized in SageMaker AI. Models customized on or after 7/16/2025 are eligible.
This enables Bedrock customers to reduce costs by processing requests in real time without pre-provisioned compute resources. Customers pay only for what they use, reducing the need for always-on infrastructure. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API, along with a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. Learn more in the Amazon Bedrock documentation.
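A hedged boto3 sketch of the flow: create an on-demand deployment for a customized model, then invoke the deployment ARN like any other model ID. The CreateCustomModelDeployment API name reflects this launch as we understand it; the ARNs are placeholders.

```python
# Hedged boto3 sketch: deploying a customized Nova model for on-demand
# inference, then invoking it. API name per this launch; ARNs are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

deployment = bedrock.create_custom_model_deployment(
    modelDeploymentName="nova-micro-sft-ondemand",
    modelArn="arn:aws:bedrock:us-east-1:111122223333:custom-model/placeholder",
)

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
response = runtime.converse(
    modelId=deployment["customModelDeploymentArn"],  # pay per request, no provisioning
    messages=[{"role": "user", "content": [{"text": "Summarize our Q2 results."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```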
Amazon CloudWatch adds generative AI observability (Preview)
Amazon CloudWatch now helps you observe generative AI applications and workloads, including agents deployed and operated with Amazon Bedrock AgentCore (Preview), providing insights into AI performance, health, and accuracy. You get an out-of-the-box view into the latency, usage, and errors of your AI workloads to detect issues faster in components like model invocations and agents. You can also find issues faster using end-to-end prompt tracing of components like knowledge bases, tools, and models. This feature is compatible with popular generative AI orchestration frameworks such as Strands Agents, LangChain, and LangGraph, offering flexibility in your choice of framework.
With this new feature, Amazon CloudWatch analyzes telemetry data across the components of a generative AI application, helping quickly identify the source of errors. For example, you can pinpoint the source of inaccurate responses, whether from gaps in your vector database or incomplete RAG retrievals, using end-to-end prompt tracing and curated metrics and logs. This connected view of component interactions helps developers optimize workloads faster to deliver high levels of availability, accuracy, reliability, and quality. Developers can keep AI agents running smoothly by monitoring and assessing their fleet of agents in one place. The agent-curated view is available in the “AgentCore” tab of the CloudWatch console for generative AI observability. Generative AI observability is integrated with other CloudWatch capabilities such as Application Signals, Alarms, Dashboards, Sensitive Data Protection, and Logs Insights, helping you seamlessly extend existing observability tools to monitor generative AI workloads. This feature is available in preview in four Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). To learn more, visit the documentation. CloudWatch pricing applies for collected and stored telemetry data.
Amazon MSK is now available in Asia Pacific (Taipei) Region
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in the Asia Pacific (Taipei) Region. Customers can create Amazon MSK Provisioned clusters in this Region starting today.
Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to more quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative streaming applications and less time managing Kafka clusters. Visit the AWS Regions page for all the Regions where Amazon MSK is available. To get started, see the Amazon MSK Developer Guide.
Amazon EventBridge Scheduler now available in all AWS Regions
Amazon EventBridge Scheduler is now available in all AWS Regions, following recent expansions to nine additional Regions including the AWS GovCloud (US) Regions. This serverless scheduler enables you to create and manage scheduled tasks and events at scale without provisioning or managing infrastructure. With EventBridge Scheduler, you can create billions of scheduled events that run across more than 270 AWS services, set up one-time or recurring schedules, and leverage flexible scheduling options with support for time zones and daylight saving time.
EventBridge Scheduler simplifies task automation for various use cases, including IT process automation, scheduling in applications, and managing schedules for global organizations. You can use pre-built integrations with AWS services, configurable retry policies, and central schedule management. You can easily create, manage, and maintain all your schedules in one central location, improving efficiency and reducing the complexity of scheduling tasks. To learn more about Amazon EventBridge Scheduler and its capabilities, see the Amazon EventBridge Scheduler product page and user guide.
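As a minimal illustration, the boto3 sketch below creates a recurring schedule with a time zone, so daylight saving transitions are handled for you. The Lambda and IAM role ARNs are placeholders; the role must allow scheduler.amazonaws.com to invoke the target.

```python
# Minimal boto3 sketch: a recurring EventBridge Scheduler schedule that invokes
# a Lambda function every weekday morning. ARNs are placeholders.
import boto3

scheduler = boto3.client("scheduler", region_name="us-east-1")

scheduler.create_schedule(
    Name="weekday-report",
    ScheduleExpression="cron(0 9 ? * MON-FRI *)",  # 09:00 every weekday
    ScheduleExpressionTimezone="Europe/London",    # DST handled automatically
    FlexibleTimeWindow={"Mode": "OFF"},            # fire at the exact time
    Target={
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:daily-report",
        "RoleArn": "arn:aws:iam::111122223333:role/SchedulerInvokeRole",
    },
)
```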
Amazon Bedrock AgentCore now available in preview
Amazon Bedrock AgentCore enables developers to deploy and operate AI agents with the scale, reliability, and security critical to real-world applications. It provides purpose-built infrastructure to scale agents securely, powerful tools to enhance agent capabilities, and essential controls to ensure trustworthy operations. AgentCore services are modular and composable, allowing them to be used together or independently. They work with any model, whether in or outside of Amazon Bedrock, and with any open-source agent framework, eliminating the trade-off between open-source flexibility and enterprise-grade security.
Amazon Bedrock AgentCore includes the following services and tools that address the barriers to moving agents from proof of concept to production:
- AgentCore Runtime provides complete session isolation with low latency and supports long-running workloads of up to 8 hours.
- AgentCore Memory enables agents to maintain both short-term and long-term memory across interactions with zero infrastructure management.
- AgentCore Gateway simplifies tool integration and discoverability, enabling developers to convert existing APIs and services into Model Context Protocol (MCP)-compatible tools with minimal code.
- AgentCore Browser Tool provides a secure, cloud-based browser runtime so agents can interact with web-based services and perform complex web tasks.
- AgentCore Code Interpreter offers a secure, sandboxed environment so agents can execute code in multiple languages.
- AgentCore Observability provides real-time visibility into end-to-end agent execution and key operational metrics through dashboards powered by Amazon CloudWatch, and is OpenTelemetry compatible.
- AgentCore Identity allows users to invoke agents by integrating with existing identity providers such as Amazon Cognito, Microsoft Entra ID, and Okta, and enables agents to then securely access AWS resources and third-party tools and services.
The preview of Amazon Bedrock AgentCore is currently available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). Learn more about Amazon Bedrock AgentCore and its services in the News Blog, and explore in-depth implementation details in the AgentCore documentation. For pricing information, visit the Amazon Bedrock AgentCore pricing page.
Amazon S3 Batch Operations now supports managing all objects in a bucket, prefix, and more in the AWS GovCloud (US) Regions
Amazon S3 Batch Operations now supports managing objects within an S3 bucket, prefix, suffix, or more, in a single step in the AWS GovCloud (US) Regions. When creating an S3 Batch Operations job, customers specify the objects on which to perform the operation. With this feature, you have the option to instead specify an entire bucket, prefix, suffix, creation date, or storage class. Amazon S3 Batch Operations will then quickly apply the operation to all the matching objects and notify you when the job completes.
S3 Batch Operations lets you easily perform one-time or recurring batch workloads such as copying objects between staging and production buckets, invoking an AWS Lambda function to convert file types, or restoring archived backups from S3 Glacier storage classes, at any scale. After starting your job, S3 Batch Operations automatically processes all of the objects that match your filtering criteria. You have full visibility into your job’s progress, including the running time and percentage of objects completed. You can also receive a detailed completion report with the status of each object once the job completes. You can get started through the AWS Command Line Interface (CLI) or the AWS Software Development Kit (SDK) client. For pricing information, please visit the Management & Insights tab of the Amazon S3 pricing page. To learn more about S3 Batch Operations, visit the S3 User Guide.
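The sketch below shows how this might look with boto3's s3control client, using a manifest generator with a prefix filter instead of a supplied manifest file. The account ID, ARNs, and the copy operation are placeholders; note the aws-us-gov ARN partition for the GovCloud (US) Regions.

```python
# Hedged boto3 sketch: an S3 Batch Operations job targeting all objects that
# match a prefix, generated automatically rather than from a manifest file.
import boto3

s3control = boto3.client("s3control", region_name="us-gov-west-1")

job = s3control.create_job(
    AccountId="111122223333",
    Priority=10,
    RoleArn="arn:aws-us-gov:iam::111122223333:role/BatchOperationsRole",
    Operation={
        "S3PutObjectCopy": {"TargetResource": "arn:aws-us-gov:s3:::prod-bucket"}
    },
    ManifestGenerator={
        "S3JobManifestGenerator": {
            "SourceBucket": "arn:aws-us-gov:s3:::staging-bucket",
            "EnableManifestOutput": False,
            "Filter": {"KeyNameConstraint": {"MatchAnyPrefix": ["images/"]}},
        }
    },
    Report={"Enabled": False},
    ConfirmationRequired=False,
)
print(job["JobId"])
```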
AWS Free Tier now offers $200 in credits and 6-month free plan to explore AWS at no cost
Today, AWS announces enhancements to its Free Tier program, offering new customers up to $200 in AWS credits to evaluate over 200 services. This program benefits a wide range of users, including cloud professionals, software developers, students, and early entrepreneurs, helping them gain hands-on experience with AWS services, develop new skills, and build proofs of concept. With the new AWS Free Tier, new customers can explore AWS’s extensive portfolio of services without incurring costs, making it easier to get started with AWS.
As part of the enhanced Free Tier program, new customers receive $100 in AWS credits upon sign-up and can earn an additional $100 in credits by using services such as Amazon EC2 and Amazon Bedrock. The program provides customers access to a greater number of AWS services while giving them control over the transition to paid usage. In addition to the ability to apply credits to paid services, customers continue to have access to over 30 always-free services. The new Free Tier is also integrated with AWS’s suite of Cloud Financial Management tools, making it easy to monitor and forecast usage. Customers can get started with the new AWS Free Tier program features by selecting the free account plan during sign-up. The free account plan expires either 6 months after sign-up or when Free Tier credits are depleted, whichever comes first. When ready, customers can upgrade to the paid plan with a single click to access more services and continue building on AWS. The new AWS Free Tier features are generally available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more, visit the AWS Free Tier website and the AWS Free Tier documentation.
Amazon Redshift announces support for cascading refresh of nested materialized views
Amazon Redshift now supports cascading refresh of nested materialized views (MVs) that are defined on local Amazon Redshift tables and external streaming sources such as Amazon Kinesis Data Streams (KDS), Amazon Managed Streaming for Apache Kafka (MSK), or Confluent Cloud.
With this update, customers can run a cascading refresh of nested MVs with a single option that specifies ‘cascade’ or ‘restrict’. The ‘restrict’ option limits the refresh to the single targeted MV, while refreshing the target MV with the ‘cascade’ option triggers a cascading refresh of all nested MVs below the target MV in a single transaction. Here’s an example:
CREATE TABLE t(a INT);
CREATE MATERIALIZED VIEW u AS SELECT * FROM t;
CREATE MATERIALIZED VIEW v AS SELECT * FROM u;
CREATE MATERIALIZED VIEW w AS SELECT * FROM v; -- w -> v -> u -> t
INSERT INTO t VALUES (1);
The following example shows the informational messages when you run REFRESH MATERIALIZED VIEW on a materialized view that depends on an out-of-date materialized view:
REFRESH MATERIALIZED VIEW v;
INFO: Materialized view v is already up to date. However, it depends on another materialized view that is not up to date.
REFRESH MATERIALIZED VIEW v CASCADE;
INFO: Materialized view v was incrementally updated successfully.
In the example above, with the ‘cascade’ refresh option, MV ‘u’ is refreshed first and MV ‘v’ next, in that order, while MV ‘w’ is not refreshed: cascade refreshes the target and the views it depends on, not the views that depend on the target.
Cascading refresh greatly simplifies application development by eliminating complex logic that was previously required for coordinating manual refresh of several nested materialized views. You can start using this new capability immediately to build more complex and flexible analytics pipelines. To get started, refer to the Nested materialized views sub-section of the Refreshing a materialized view section of the documentation.
Amazon Corretto July 2025 Quarterly Updates
On July 15, 2025, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature Release (FR) versions of OpenJDK. Corretto 24.0.2, 21.0.8, 17.0.16, 11.0.28, and 8u462 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.
Visit the Corretto home page to download Corretto 24, Corretto 21, Corretto 17, Corretto 11, or Corretto 8. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo. Feedback is welcome!
AWS Cost Anomaly Detection improves accuracy with model enhancements
AWS announces significant improvements to AWS Cost Anomaly Detection, enhancing its ability to identify meaningful changes in your AWS spending patterns. This update delivers more consistent and reliable cost monitoring by better handling historical cost variations.
With this improvement, Cost Anomaly Detection better understands your organization’s typical spend patterns. It distinguishes between one-time and recurring cost events, while maintaining accuracy in detecting cost changes that require your attention. These improvements are automatically applied to all AWS Cost Anomaly Detection monitors in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about AWS Cost Anomaly Detection and this enhancement, visit the AWS Cost Anomaly Detection product page and User Guide.
AWS Blogs
AWS Japan Blog (Japanese)
- [Event Report] AWS Summit Japan Exhibition: Achieve realistic product fitting and placement with Amazon Nova Canvas’s virtual try-on, and reduce return rates
- [Event Report] AWS Summit Japan 2025 ~ Retail Consumer Goods Industry Booth
- [Event Report] A New Customer Service Experience Using 3D Avatars and Multi-AI Agents
- Use Amazon S3 Intelligent-Tiering to manage Amazon S3 storage costs at granular scale
AWS Japan Startup Blog (Japanese)
AWS News Blog
- Top announcements of the AWS Summit in New York, 2025
- Announcing Amazon Nova customization in Amazon SageMaker AI
- Introducing Amazon Bedrock AgentCore: Securely deploy and operate AI agents at any scale (preview)
AWS Cloud Financial Management
- Simplify Departmental Cost Allocation with AWS Organizations and Lambda
- AWS Price List Gets a Natural Language Upgrade: Introducing the AWS Pricing MCP Server
AWS Big Data Blog
- Unifying metadata governance across Amazon SageMaker and Collibra
- Compaction support for Avro and ORC file formats in Apache Iceberg tables in Amazon S3
AWS Compute Blog
Containers
- Under the hood: Amazon EKS ultra scale clusters
- Amazon EKS enables ultra scale AI/ML workloads with support for 100K nodes per cluster
AWS Database Blog
AWS for Industries
Artificial Intelligence
- Accenture scales video analysis with Amazon Nova and Amazon Bedrock Agents
- Deploy conversational agents with Vonage and Amazon Nova Sonic
- Enabling customers to deliver production-ready AI agents at scale