3/25/2026, 12:00:00 AM ~ 3/26/2026, 12:00:00 AM (UTC)
Recent Announcements
AWS Firewall Manager launches in AWS Asia Pacific (New Zealand) Region
AWS Firewall Manager is now available in the AWS Asia Pacific (New Zealand) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.

With AWS Firewall Manager, customers can apply defense-in-depth policies across the full range of AWS security services for applications and workloads hosted in the AWS Asia Pacific (New Zealand) Region. Customers wishing to secure assets with AWS WAF can create and maintain security policies with AWS Firewall Manager. To learn how AWS Firewall Manager works, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Accelerate AI-assisted development with Agent Plugin for AWS Serverless
AWS announces the Agent Plugin for AWS Serverless, enabling developers to easily build, deploy, troubleshoot, and manage serverless applications using AI coding assistants like Kiro, Claude Code, and Cursor.

Agent plugins extend AI coding assistants with structured, reusable capabilities by packaging skills, sub-agents, hooks, and Model Context Protocol (MCP) servers into a single modular unit. The Agent Plugin for AWS Serverless dynamically loads the relevant guidance and expertise required throughout the development lifecycle for building production-ready serverless applications on AWS. You can create AWS Lambda functions that integrate with popular event sources like Amazon EventBridge, Amazon Kinesis, and AWS Step Functions, while following built-in best practices for observability, performance optimization, and troubleshooting. As you adopt Infrastructure as Code (IaC), you can streamline project setup with the AWS Serverless Application Model (SAM) and the AWS Cloud Development Kit (CDK), with reusable constructs, proven architectural patterns, automated CI/CD pipelines, and local testing workflows. For long-running, stateful workflows, you can build with confidence using Lambda durable functions, which provide a checkpoint-replay model, advanced orchestration patterns, and error-handling capabilities. Lastly, you can design and manage APIs as part of your application using Amazon API Gateway, with guidance across REST APIs, HTTP APIs, and WebSocket APIs. These capabilities are packaged as agent skills in the open Agent Skills format, making them usable across compatible AI tools such as Kiro, Claude Code, and Cursor.
The Agent Plugin for AWS Serverless is available in any AI coding assistant that supports agent plugins, such as Claude Code and Cursor. In Claude Code, you can install it from the official Claude Marketplace with a single command: /plugin install aws-serverless@claude-plugins-official. You can also install agent skills from the plugin individually in any AI coding assistant that supports agent skills. To learn more about the plugin and its capabilities, visit GitHub.
AWS Batch now provides AMI status and supports AWS Health Planned Lifecycle Events
AWS Batch now provides enhanced visibility into your compute environments with two new capabilities that help you maintain operational best practices. When you describe a compute environment, you can now see the status of your Batch-provided default Amazon Machine Images (AMIs), indicating when updates are available. Additionally, AWS Batch now publishes AWS Health Planned Lifecycle Events to help you prepare for and track changes affecting your batch computing resources.

The AMI status indicator shows whether you're using the latest AMI (LATEST) or whether an update is available (UPDATE_AVAILABLE), helping you identify compute environments that may be running outdated AMIs. AWS Health Planned Lifecycle Events provide advance notification of upcoming changes, such as AMI deprecations, help you monitor the migration status of your affected compute environments, and let you automate responses using Amazon EventBridge. The AMI status indicator and AWS Health Planned Lifecycle Events are available today in all AWS Regions where AWS Batch is available. For more information, see the Managing AMI versions and AWS Health Planned Lifecycle Events pages in the AWS Batch User Guide.
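As a rough illustration, the new status values could be consumed like this when scanning DescribeComputeEnvironments output. The amiStatus field name is an assumption based on the announced LATEST / UPDATE_AVAILABLE values; check the actual response shape in the AWS Batch API reference before relying on it.

```python
# Sketch: flag AWS Batch compute environments whose Batch-provided default AMI
# reports an available update. The response fragment below is illustrative; the
# "amiStatus" key is a hypothetical name for the announced status field.

def environments_needing_ami_update(describe_response: dict) -> list[str]:
    """Return names of compute environments reporting UPDATE_AVAILABLE."""
    return [
        env["computeEnvironmentName"]
        for env in describe_response.get("computeEnvironments", [])
        if env.get("amiStatus") == "UPDATE_AVAILABLE"  # hypothetical field name
    ]

# Example DescribeComputeEnvironments response fragment (shape illustrative only):
sample = {
    "computeEnvironments": [
        {"computeEnvironmentName": "ce-prod", "amiStatus": "LATEST"},
        {"computeEnvironmentName": "ce-nightly", "amiStatus": "UPDATE_AVAILABLE"},
    ]
}
print(environments_needing_ami_update(sample))  # ['ce-nightly']
```

In practice the response would come from a DescribeComputeEnvironments call, and the same filter could feed an EventBridge-driven remediation workflow.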
Amazon SageMaker Unified Studio launches support for remote connection from Cursor IDE
Today, AWS announces remote connection from Cursor IDE to Amazon SageMaker Unified Studio via the AWS Toolkit extension. This new capability allows data scientists, ML engineers, and developers to leverage their Cursor setup - including its AI-powered code completion, natural language editing, and multi-file editing capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Cursor to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing AI-assisted development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Cursor setup - complete with custom rules, extensions, and AI model preferences - while accessing your compute resources and data on Amazon SageMaker. Since Cursor is built on Code-OSS, authentication is handled securely via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.
This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more, visit the local IDE support documentation.
Amazon Bedrock AgentCore adds support for Chrome policies and custom root CA
Amazon Bedrock AgentCore now enables customers to configure Chrome Enterprise policies for AgentCore Browser and specify custom root Certificate Authority (CA) certificates for both AgentCore Browser and Code Interpreter. These enhancements help ensure enterprise requirements are met when allowing AI agents to operate within organizations that have strict security policies and internal infrastructure using custom certificates.

With Chrome policies, you can leverage more than 100 configurable policies for managing browser behavior across security, URL filtering, content settings, and more to enforce organizational compliance requirements. For example, restrict agents to specific URLs for kiosk-mode operations, disable password managers and downloads for data-entry tasks, or implement URL blocklists for regulatory compliance. Custom root CA support enables agents to seamlessly connect to internal services like Artifactory, Jira, and finance portals that use SSL certificates signed by your organization's internal Certificate Authority, and to work with corporate proxies performing TLS interception. These features are available in all 14 AWS Regions where Amazon Bedrock AgentCore Browser and Code Interpreter are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Canada (Central). To learn more, visit the AgentCore Browser documentation.
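To make the policy examples above concrete, here is a minimal sketch of what such a configuration might contain. The Chrome policy names (URLAllowlist, PasswordManagerEnabled, DownloadRestrictions) are real Chrome Enterprise policies, but the surrounding wrapper keys are hypothetical; consult the AgentCore Browser documentation for the actual request shape.

```python
import json

# Sketch: Chrome Enterprise policies plus a custom root CA bundle for an
# agent browser session. "chromePolicies" and "customRootCertificates" are
# assumed wrapper keys for illustration only.
browser_config = {
    "chromePolicies": {                       # hypothetical wrapper key
        "URLAllowlist": ["https://intranet.example.com/*"],  # kiosk-style URL restriction
        "PasswordManagerEnabled": False,      # disable password manager for data-entry tasks
        "DownloadRestrictions": 3,            # Chrome policy value 3 = block all downloads
    },
    "customRootCertificates": [               # hypothetical wrapper key
        "-----BEGIN CERTIFICATE-----\n<internal-CA-pem>\n-----END CERTIFICATE-----"
    ],
}

print(json.dumps(browser_config["chromePolicies"], indent=2))
```

The same custom-CA list would also apply to Code Interpreter sessions that reach internal services behind TLS-intercepting proxies.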
AWS Batch now supports quota management and preemption for SageMaker Training jobs
AWS Batch now supports quota management with job preemption for SageMaker Training jobs, enabling you to efficiently allocate and share compute resources across your teams and projects. If you're using GPU capacity in SageMaker Training jobs, you can now intelligently allocate compute resources, prioritize your business-critical training jobs, and automatically preempt lower-priority workloads when your urgent experiments arrive.

With quota management, you can create up to 20 quota shares per job queue that function as virtual queues with dedicated capacity limits and configurable resource sharing strategies. The service automatically uses cross-share preemption to restore borrowed capacity when the original owner submits jobs, and supports in-share preemption to allow high-priority jobs to preempt lower-priority jobs within the same quota share. You can monitor capacity utilization at the queue, quota share, and job-level granularity, update job priorities after submission to influence preemption decisions, and configure preemption retry limits to control behavior. The feature integrates directly with the SageMaker Python SDK via the aws_batch module. Quota management with job preemption for SageMaker Training jobs is available today in all AWS Regions where AWS Batch is available. For more information, see our Quota Management example notebook on GitHub and the AWS Batch User Guide.
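For intuition, the sharing concept resembles AWS Batch's existing fair-share scheduling policies, where jobs carry a share identifier and each share has a weight. The payload below mirrors the real CreateSchedulingPolicy/SubmitJob shapes as an analogy only; it is not the new quota-share API, whose request shape is documented in the AWS Batch User Guide.

```python
# Sketch: fair-share-style distribution across teams, modeled on AWS Batch
# scheduling policies. This illustrates the resource-sharing concept behind
# quota shares, not the new quota-share request shape itself.
scheduling_policy_request = {
    "name": "ml-training-shares",
    "fairsharePolicy": {
        "shareDecaySeconds": 3600,
        "shareDistribution": [
            # In fair-share policies, a smaller weightFactor means a larger share.
            {"shareIdentifier": "research", "weightFactor": 0.5},
            {"shareIdentifier": "production", "weightFactor": 1.0},
        ],
    },
}

# Jobs carry a shareIdentifier at submission so the scheduler can attribute
# usage (and drive preemption decisions) per share:
job_submission = {
    "jobName": "train-llm",
    "shareIdentifier": "research",
    "schedulingPriority": 50,  # higher priority can preempt lower within a share
}
print(job_submission["shareIdentifier"])
```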
Amazon Route 53 Profiles now supports granular IAM permissions for resource and VPC associations
Amazon Route 53 Profiles now supports granular AWS Identity and Access Management (IAM) permissions, allowing you to control which users can manage specific resource types and VPC associations within your Profiles. With this launch, you can create IAM policies that restrict users to specific operations (associate, disassociate, or update) on individual resource types such as private hosted zones, Resolver rules, or DNS Firewall rule groups. You can also define permissions based on resource ARNs, hosted zone names, Resolver rule domain names, DNS Firewall rule group priority ranges, or specific VPC associations.

Route 53 Profiles enable you to define a standard DNS configuration that includes private hosted zone associations, Resolver rules, and DNS Firewall rule groups, and apply this configuration to multiple VPCs in your account or share with AWS accounts using AWS Resource Access Manager (RAM). This new capability provides administrators with fine-grained control over Profile management, enabling you to delegate specific responsibilities while maintaining security and governance standards across your organization. This feature is available at no additional charge in all AWS Regions where Route 53 Profiles is available, except in Middle East (Bahrain) and Middle East (UAE). To learn more, see the Amazon Route 53 Profiles documentation and pricing page.
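A policy restricting a delegate to association operations on one specific Profile might look like the sketch below. The route53profiles actions shown are real API operations; the account ID and profile ID are placeholders, and any condition keys for hosted zone names or rule group priorities should be taken from the service authorization reference rather than this illustration.

```python
import json

# Sketch: IAM policy allowing a user to associate and disassociate resources
# for a single Route 53 Profile only. ARN values are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageOneProfileAssociations",
            "Effect": "Allow",
            "Action": [
                "route53profiles:AssociateResourceToProfile",
                "route53profiles:DisassociateResourceFromProfile",
            ],
            "Resource": "arn:aws:route53profiles:us-east-1:111122223333:profile/rp-example0123456789",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching this policy to a team's role delegates association management for that one Profile while leaving Profile creation and deletion to administrators.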
Amazon Aurora PostgreSQL now available with the AWS Free Tier
Amazon Aurora PostgreSQL is now available on the AWS Free Tier, which offers new customers $100 in AWS credits upon sign-up and the ability to earn an additional $100 in credits by using services including Amazon RDS.

With a Free Plan account, you can create an Aurora PostgreSQL serverless cluster from the Amazon RDS Console, AWS CLI, or AWS SDKs using express configuration, which enables you to create and query an Aurora PostgreSQL database in seconds. To get started, select the Free Plan during new AWS account sign-up.
AWS Free Tier is available in all AWS Regions where Aurora PostgreSQL serverless is supported. For more details, see the Aurora & RDS Free Tier and AWS Free Tier pages.
Amazon Aurora PostgreSQL now supports creating and connecting to a database in seconds
Amazon Aurora PostgreSQL now offers a new experience to create a cluster with express configuration, enabling you to create and query an Aurora serverless database in seconds. With pre-configured settings, the new experience accelerates initial setup and reduces time to first query. You have the flexibility to modify certain settings during creation and most other settings afterward.

Aurora clusters created using express configuration reside outside a virtual private cloud (VPC) network and include an internet access gateway for secure connections from your favorite development tools - no VPN or AWS Direct Connect required. The internet access gateway supports the full PostgreSQL wire protocol, enabling connectivity from a broad range of development tools and clients. It is distributed across multiple Availability Zones, providing the same level of high availability as your Aurora cluster. It also sets up AWS Identity and Access Management (IAM) authentication for your administrator user by default, enabling passwordless database authentication from the beginning without additional configuration.
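With IAM authentication, the "password" in a client connection is a short-lived signed token; in practice it comes from the RDS generate_db_auth_token API. The sketch below only assembles a libpq-style DSN around a placeholder token and endpoint, assuming a standard PostgreSQL client.

```python
# Sketch: building a connection string for an express-configuration Aurora
# PostgreSQL cluster reached over its internet access gateway. The endpoint is
# a placeholder; a real IAM auth token would come from
# boto3.client("rds").generate_db_auth_token(...) and be passed as the password.

def build_dsn(endpoint: str, user: str, token: str, dbname: str = "postgres") -> str:
    """Assemble a libpq-style DSN; TLS is required because the IAM token travels as the password."""
    return (
        f"host={endpoint} port=5432 dbname={dbname} "
        f"user={user} password={token} sslmode=require"
    )

dsn = build_dsn(
    "example-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    "admin_user",
    "<iam-auth-token>",  # placeholder for the generated token
)
print("sslmode=require" in dsn)  # True
```

Any client that speaks the PostgreSQL wire protocol (psql, psycopg, JDBC) can then use this DSN without a stored database password.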
Aurora PostgreSQL serverless is now available with the AWS Free Tier on both the Free and Paid plans. For regional availability and more details, see the Amazon Aurora documentation or read the launch blog. To get started, use the Amazon RDS Console, AWS CLI, or AWS SDKs.
Amazon SageMaker AI now supports serverless reinforcement fine-tuning for 12 additional models
Amazon SageMaker AI now supports serverless model customization and reinforcement fine-tuning for 12 additional open-weight models, enabling you to fine-tune and evaluate them without provisioning or managing infrastructure. The newly supported models are: gpt-oss-120b, Qwen2.5 72B Instruct, DeepSeek-R1-Distill-Llama-70B, Qwen3 14B, DeepSeek-R1-Distill-Qwen-14B, Qwen2.5 14B Instruct, DeepSeek-R1-Distill-Llama-8B, DeepSeek-R1-Distill-Qwen-7B, Qwen3 4B, Meta Llama 3.2 3B Instruct, Qwen3 1.7B, and DeepSeek-R1-Distill-Qwen-1.5B. With this expansion, you can customize these models using supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement fine-tuning (RFT) techniques including RLVR and RLAIF, and only pay for what you use.

Reinforcement fine-tuning enables you to align models to complex, domain-specific reasoning tasks where techniques such as traditional SFT alone fall short. With RLVR, you can improve model accuracy on verifiable tasks such as code generation, math, and structured extraction by providing reward signals based on correctness. RLAIF uses AI-generated feedback to steer model behavior toward your quality and safety preferences. These techniques are available on previously supported and newly added models, with no cluster setup, capacity planning, or distributed training expertise required. These models and fine-tuning techniques are available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and EU (Ireland). To get started, see the Amazon SageMaker AI model customization product page and visit the Amazon SageMaker AI pricing page (Model Customization tab) to see the full list of models, techniques, and prices.
Amazon EC2 I7ie instances now available in additional AWS regions
Starting today, Amazon EC2 I7ie instances are available in the AWS Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Melbourne), Asia Pacific (Thailand), Europe (Zurich), Europe (Milan), and Mexico (Central) Regions. Designed for large, storage-I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance versus I3en instances.

I7ie instances offer up to 120 TB of local NVMe storage, the highest density among storage-optimized instances, and up to twice as many vCPUs and as much memory as prior-generation instances. Powered by 3rd-generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
I7ie instances are high-density storage-optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in nine virtual sizes and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS).
To learn more, visit the I7ie instances page.
AWS Backup expands support for Amazon DocumentDB to 12 Regions
AWS Backup now supports Amazon DocumentDB in 12 additional AWS Regions: Asia Pacific (Malaysia, Thailand, Osaka, Hong Kong, Jakarta, Melbourne), Europe (Stockholm, Spain, Zurich), Africa (Cape Town), Israel (Tel Aviv), and Mexico (Central).

This expansion brings policy-based data protection and recovery to your Amazon DocumentDB clusters in these newly supported Regions.
To start protecting your DocumentDB clusters with AWS Backup, add your DocumentDB clusters to your existing backup plans, or create a new backup plan and attach your DocumentDB clusters to it. To learn more about AWS Backup for Amazon DocumentDB, visit the product page, pricing page, and documentation. To get started, visit the AWS Backup console, AWS Command Line Interface (CLI), or AWS SDKs.
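Attaching DocumentDB clusters to a backup plan amounts to creating a backup selection that lists their ARNs. The payload below mirrors AWS Backup's CreateBackupSelection input; the account ID, plan ID, role, and cluster name are placeholders.

```python
# Sketch: a backup selection adding a DocumentDB cluster to an existing
# AWS Backup plan by ARN. DocumentDB clusters use rds-style cluster ARNs.
backup_selection = {
    "BackupPlanId": "example-backup-plan-id",  # placeholder plan ID
    "BackupSelection": {
        "SelectionName": "docdb-clusters",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:rds:ap-southeast-3:111122223333:cluster:my-docdb-cluster"
        ],
    },
}
print(backup_selection["BackupSelection"]["SelectionName"])
```

This dictionary would be passed as keyword arguments to the CreateBackupSelection API via the console, CLI, or an SDK.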
AWS Transfer Family AS2 now supports receiving MDNs asynchronously
AWS Transfer Family now supports receiving Message Disposition Notifications (MDNs) asynchronously for messages sent to trading partners over Applicability Statement 2 (AS2). This enables you to migrate your AS2 workflows to Transfer Family while maintaining interoperability with your trading partners, regardless of their message processing times or network requirements.

Organizations across healthcare, life sciences, retail, manufacturing, and supply chain sectors depend on Transfer Family for secure AS2-based data exchange with trading partners and regulatory bodies. You can now send AS2 messages while requesting MDNs asynchronously over a separate TLS connection, ensuring compatibility with partner AS2 systems that have extended processing times or high latency. With this launch, Transfer Family supports both synchronous and asynchronous MDN requests, enabling you to migrate AS2 workflows to AWS without impacting your partner integrations.
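An AS2 connector's MDN behavior is configured in the As2Config block of the CreateConnector request. The keys shown below (MdnResponse, MdnSigningAlgorithm, and the profile IDs) are existing CreateConnector fields; whether asynchronous MDNs are requested via a new MdnResponse value, as assumed here, or via a separate parameter should be confirmed in the Transfer Family API reference.

```python
# Sketch: AS2 connector configuration requesting MDNs from a trading partner.
# Profile IDs, URL, and role ARN are placeholders.
connector_request = {
    "Url": "https://partner.example.com/as2",
    "As2Config": {
        "LocalProfileId": "p-1234567890abcdef0",
        "PartnerProfileId": "p-0fedcba0987654321",
        "MdnResponse": "ASYNC",          # assumed new value; "SYNC" and "NONE" predate this launch
        "MdnSigningAlgorithm": "SHA256",
        "SigningAlgorithm": "SHA256",
        "EncryptionAlgorithm": "AES256_CBC",
    },
    "AccessRole": "arn:aws:iam::111122223333:role/as2-access-role",
}
print(connector_request["As2Config"]["MdnResponse"])
```

With an asynchronous request, the partner acknowledges receipt on a separate TLS connection later, rather than in the same HTTP exchange as the message.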
This capability is available in the majority of AWS regions where AWS Transfer Family is offered. For the full list of supported regions, visit the AWS Capabilities tool in Builder Center. For detailed implementation guidance, see the Transfer Family user guide. To learn more, visit the AWS Transfer Family product page.
Amazon SageMaker HyperPod now supports continuous provisioning for Slurm-orchestrated clusters
Amazon SageMaker HyperPod now extends continuous provisioning support to clusters using the Slurm orchestrator, enabling greater flexibility and efficiency for enterprise customers running large-scale AI/ML training workloads. AI/ML customers running Slurm-based clusters need to start training quickly, scale seamlessly, perform maintenance without disrupting operations, and have granular visibility into cluster operations. Previously, if any instance group could not be fully provisioned, the entire cluster creation or scaling operation failed and rolled back, causing delays and requiring manual intervention.

With continuous provisioning for Slurm, SageMaker HyperPod automatically provisions remaining capacity in the background while training jobs begin immediately on available instances. The system uses priority-based provisioning to bring up the Slurm controller node first, followed by login and worker nodes in parallel, so your cluster reaches an operational state as quickly as possible. HyperPod retries failed node launches asynchronously and adds nodes to the Slurm cluster automatically as they become available, ensuring clusters reliably reach their desired scale without manual intervention. You can now perform concurrent, non-blocking scaling operations across multiple instance groups simultaneously; a capacity shortage in one instance group no longer blocks scaling in others. These capabilities help customers reduce time-to-training, maximize resource utilization, and focus on innovation rather than infrastructure management. This feature is available for new SageMaker HyperPod clusters using the Slurm orchestrator. You can enable continuous provisioning by setting the NodeProvisioningMode parameter to "Continuous" when creating new HyperPod clusters with the CreateCluster API. Continuous provisioning can also be enabled when creating new clusters through the AWS CLI and the SageMaker AI console.
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more about continuous provisioning for Slurm clusters, see the Amazon SageMaker HyperPod User Guide.
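The NodeProvisioningMode parameter named in the announcement would appear in a CreateCluster request roughly as below. Only that parameter and its "Continuous" value come from the announcement; the instance-group fields are a trimmed placeholder, and the full required shape (lifecycle configuration, VPC settings, orchestrator block) is in the SageMaker CreateCluster API reference.

```python
# Sketch: enabling continuous provisioning on a new Slurm-orchestrated
# HyperPod cluster. Instance types and counts are illustrative placeholders.
create_cluster_request = {
    "ClusterName": "slurm-training-cluster",
    "NodeProvisioningMode": "Continuous",  # provision remaining capacity in the background
    "InstanceGroups": [
        # The Slurm controller is prioritized first under continuous provisioning,
        # then login and worker nodes come up in parallel.
        {"InstanceGroupName": "controller", "InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
        {"InstanceGroupName": "workers", "InstanceType": "ml.p5.48xlarge", "InstanceCount": 16},
    ],
}
print(create_cluster_request["NodeProvisioningMode"])
```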
Amazon Bedrock AgentCore Runtime launches managed session storage in public preview
Amazon Bedrock AgentCore Runtime now offers managed session storage in public preview, enabling agents to persist their filesystem state across stop and resume cycles. Modern agents write code, install packages, generate artifacts, and manage state through the filesystem. Until now, that work was lost when a session stopped. With managed session storage, everything your agent writes to a configured mount path persists automatically, even after the compute environment terminates.

When you configure session storage, each session gets a persistent directory at the mount path you specify. Your agent reads and writes files as normal, and AgentCore Runtime transparently replicates data to durable storage. When the session stops, data is flushed during graceful shutdown. When you resume with the same session ID, a new microVM mounts the same storage and the agent continues from where it left off, with source files, installed packages, build artifacts, and git history all intact. No checkpoint logic, no save-and-restore code, and no changes to your agent application are required. Session storage supports standard Linux filesystem operations including regular files, directories, and symlinks, with up to 1 GB per session and data retained for 14 days of idle time. Storage communication is confined to a single session's data and cannot access other sessions or AgentCore Runtime environments.
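The contract from the agent's point of view is just ordinary file I/O under the configured mount path. The sketch below simulates a stop/resume cycle with a temporary directory standing in for the mount path (the /session-style path is whatever you configure; no AgentCore API calls are involved here).

```python
import os
import tempfile

# Simulate the session-storage contract: files written under the mount path
# in one session are readable after resume. A temp directory stands in for
# the configured mount path.
mount_path = tempfile.mkdtemp(prefix="agentcore-session-")

# --- first session: the agent records progress as a plain file ---
with open(os.path.join(mount_path, "progress.txt"), "w") as f:
    f.write("step=3\n")

# --- resumed session (same session ID): a new microVM mounts the same
# storage, so the file is simply there ---
with open(os.path.join(mount_path, "progress.txt")) as f:
    restored = f.read()

print(restored.strip())  # step=3
```

Because persistence is transparent, existing agents gain durability just by writing under the mount path; no checkpoint or restore code changes are needed.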
Session storage is available in public preview across fourteen AWS Regions: US (N. Virginia, Ohio, Oregon), Canada (Central), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Paris, Stockholm).
To learn more, see persist files across stop/resume in the Amazon Bedrock AgentCore documentation.
AWS Blogs
AWS Japan Blog (Japanese)
- Self-Service Implementation and Continuous Improvement with Amazon Connect and Amazon Lex: JBR Contact Center Operational Efficiency Initiatives
- Reimagine mainframe applications with agentic AI and AWS Transform
- Accelerate Amazon Connect AI agent development with Kiro
AWS News Blog
AWS for Industries
- Build ChatGPT Apps with MCP Servers and AWS Infrastructure
- The Luggage Lab: Accelerate product innovation with AWS generative AI services
Artificial Intelligence
- Unlocking video insights at scale with Amazon Bedrock multimodal models
- Deploy voice agents with Pipecat and Amazon Bedrock AgentCore Runtime – Part 1
- Reinforcement fine-tuning on Amazon Bedrock with OpenAI-Compatible APIs: a technical walkthrough
Networking & Content Delivery
AWS Storage Blog
- Secure SFTP file sharing with AWS Transfer Family, Amazon FSx for NetApp ONTAP, and S3 Access Points
- How Tavily reduced AI search caching costs by 95% with Amazon S3 Express One Zone