3/18/2026, 12:00:00 AM ~ 3/19/2026, 12:00:00 AM (UTC)

Recent Announcements

MiniMax M2.5 and GLM 5 models now available on Amazon Bedrock

Amazon Bedrock expands model selection for customers by adding support for GLM 5 and MiniMax M2.5. GLM 5 is a frontier-class, general-purpose large language model optimized for complex systems engineering and long-horizon agentic tasks. It builds on the GLM 4.5 agent-centric lineage and is designed to support multi-step reasoning, math (including AIME-style benchmarks), advanced coding, and tool-augmented workflows, with long context support suitable for sophisticated agents and enterprise applications. MiniMax M2.5 is an agent-native frontier model trained explicitly to reason efficiently, decompose tasks optimally, and complete complex workflows under real-world time and cost constraints. It achieves task completion speeds comparable to or faster than leading proprietary frontier models by combining high inference throughput with reinforcement learning focused on token-efficient reasoning and better decision-making in agentic scaffolds.

MiniMax M2.5 and GLM 5 are now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation.

Amazon EC2 High Memory U7i-6TB instances now available in Asia Pacific (Malaysia)

Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in AWS Asia Pacific (Malaysia). U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.

U7i-6tb instances deliver 448 vCPUs with up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.

To learn more about U7i instances, visit the High Memory instances page.

Amazon ECR now supports pull through cache for Chainguard

Amazon Elastic Container Registry (Amazon ECR) pull through cache now supports Chainguard’s registry as an upstream source. With today’s release, customers benefit from the security and availability of Amazon ECR for private Chainguard images.

As customers continue to scale their use of Chainguard images, keeping them synchronized with Chainguard’s registry becomes increasingly important. With ECR’s pull through cache feature, customers can keep Chainguard images in sync without additional workflows or tools to manage. Amazon ECR’s pull through cache supports frequent registry syncs, helping to keep container images sourced from Chainguard up to date. Customers can also apply ECR features such as image scanning and lifecycle policies to their cached Chainguard images. The pull through cache for Chainguard is available in all AWS Regions where Amazon ECR pull through cache is supported. To get started, review our documentation.
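A pull through cache rule is created with ECR's CreatePullThroughCacheRule API. The sketch below assembles the request parameters as a plain dict; the repository prefix and Secrets Manager ARN are illustrative placeholders, and `cgr.dev` is Chainguard's public registry host.

```python
# Illustrative parameters for ECR's CreatePullThroughCacheRule API.
# The prefix and secret ARN are placeholders, not real resources.
def build_pull_through_cache_rule(prefix, upstream_url, credential_arn=None):
    """Assemble the parameter dict for CreatePullThroughCacheRule."""
    params = {
        "ecrRepositoryPrefix": prefix,
        "upstreamRegistryUrl": upstream_url,
    }
    if credential_arn:
        # Private upstreams such as Chainguard require registry credentials
        # stored as an AWS Secrets Manager secret.
        params["credentialArn"] = credential_arn
    return params

rule = build_pull_through_cache_rule(
    "chainguard",
    "cgr.dev",
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:ecr-pullthroughcache/chainguard",
)
```

Once the rule exists, pulling `<account>.dkr.ecr.<region>.amazonaws.com/chainguard/<image>` fetches through the cache and keeps the cached copy synchronized with the upstream.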

NVIDIA Nemotron 3 Super now available on Amazon Bedrock

Amazon Bedrock now supports NVIDIA Nemotron 3 Super, an open hybrid Mixture-of-Experts (MoE) model designed for complex multi-agent applications. Built for agentic workloads, Nemotron 3 Super delivers fast, cost-efficient inference, enabling AI agents to maintain focus and accuracy across long, multi-step tasks without losing context. Fully open with weights, datasets, and recipes, the model supports easy customization and secure deployment, making it well-suited for enterprises, startups, and individual developers building multi-agent workflows and advanced reasoning applications.

Amazon Bedrock gives customers access to Nemotron 3 Super through a single, fully managed API, with no infrastructure to provision or models to host. Bedrock’s serverless inference, built-in security controls, and compatibility with OpenAI API specifications make it easy to integrate Nemotron 3 Super into existing workflows and deploy at production scale with confidence.

NVIDIA Nemotron 3 Super is now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation. To learn more and get started, visit the Amazon Bedrock console or the service documentation. To get started with Amazon Bedrock’s OpenAI API-compatible service endpoints, see the documentation.
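Because Bedrock exposes OpenAI API-compatible endpoints, a request to the model takes the familiar chat-completions shape. The sketch below builds such a payload; the model identifier is a placeholder, not the confirmed Bedrock model ID for Nemotron 3 Super.

```python
# Sketch of an OpenAI-style chat-completions payload for Bedrock's
# OpenAI-compatible endpoint. The model ID below is a placeholder.
import json

payload = {
    "model": "nvidia.nemotron-3-super",  # placeholder, not the real ID
    "messages": [
        {"role": "system", "content": "You are a planning agent."},
        {"role": "user", "content": "Break this task into steps: deploy a web app."},
    ],
    "max_tokens": 512,
    "temperature": 0.2,
}

# Serialized request body, as it would be POSTed to the endpoint.
body = json.dumps(payload)
```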

Amazon S3 Access Grants are now available in the AWS Asia Pacific (New Zealand) Region

You can now create Amazon S3 Access Grants in the AWS Asia Pacific (New Zealand) Region.

Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.

Visit the AWS Region Table for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our product page.

Amazon EC2 M6in and M6idn instances are now available in Europe (London) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the AWS Europe (London) Region. These sixth-generation network-optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, 2x more than comparable fifth-generation instances. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as the 5G User Plane Function.

M6in and M6idn instances are available in 10 different instance sizes including metal, offering up to 128 vCPUs and 512 GiB of memory. They deliver up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth and up to 400K IOPS. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage.

With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm, Zurich, London), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney, Seoul), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances through Savings Plans, On-Demand, and Spot instances. To learn more, see M6in and M6idn instances page.

Amazon EC2 C8a instances now available in the Asia Pacific (Tokyo) region

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Asia Pacific (Tokyo) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances.

C8a instances deliver 33% more memory bandwidth compared to C7a instances, making them ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. C8a instances are built on the AWS Nitro System and are ideal for high-performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.

Amazon Inspector expands agentless EC2 scanning and introduces Windows KB-based findings

Amazon Inspector now offers expanded agentless EC2 scanning with enhanced detection coverage, including new support for Windows operating system vulnerability scanning without requiring an agent. Security teams and IT administrators can now detect vulnerabilities across a broader range of software and applications on their EC2 instances, including WordPress, Apache HTTP Server, Python packages, and Ruby gems, as well as Windows OS vulnerabilities, all through agentless scanning. Customers automatically receive findings for newly supported software and applications with no configuration changes required.

Amazon Inspector is also introducing Windows Knowledge Base (KB)-based findings for Windows OS vulnerabilities. Rather than receiving a separate finding for each CVE addressed by a single Microsoft patch, customers now receive a single consolidated KB finding that groups all related CVEs together. Each KB finding surfaces the highest CVSS score, EPSS score, and exploit availability from its constituent CVEs, and includes a direct link to the relevant Microsoft KB article, making it straightforward to understand exactly which patch to apply and why. All existing CVE-based Windows OS findings will automatically transition to KB-based findings, and customers do not need to take any additional action.

Both capabilities are available in all AWS Regions where Amazon Inspector is available. To learn more, visit the Amazon Inspector product page and the Amazon Inspector documentation.
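The KB consolidation described above can be sketched as a simple grouping operation: all CVEs fixed by one Microsoft patch collapse into a single finding that surfaces the worst-case scores. The field names and the sample CVE IDs below are illustrative, not the actual Inspector finding schema.

```python
# Illustrative consolidation of per-CVE findings into one KB finding,
# mirroring the described behavior (field names are assumptions).
def consolidate_kb_finding(kb_id, cve_findings):
    """Group CVEs addressed by one Microsoft KB patch into a single
    finding surfacing the highest scores across its constituent CVEs."""
    return {
        "kbId": kb_id,
        "cves": sorted(f["cveId"] for f in cve_findings),
        "cvssScore": max(f["cvssScore"] for f in cve_findings),
        "epssScore": max(f["epssScore"] for f in cve_findings),
        "exploitAvailable": any(f["exploitAvailable"] for f in cve_findings),
    }

finding = consolidate_kb_finding("KB5034441", [
    {"cveId": "CVE-2024-0001", "cvssScore": 7.5, "epssScore": 0.12, "exploitAvailable": False},
    {"cveId": "CVE-2024-0002", "cvssScore": 9.8, "epssScore": 0.64, "exploitAvailable": True},
])
```

The consolidated finding reports the patch once, with CVSS 9.8 and exploit availability inherited from the worst constituent CVE, which is why a single KB finding is enough to prioritize the patch.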

AWS Config launches 75 new managed rules

AWS Config announces the launch of an additional 75 managed Config rules for use cases such as security, durability, and operations. You can now search, discover, enable, and manage these additional rules directly from AWS Config and govern more use cases for your AWS environment.

With this launch, you can enable these controls across your account or across your organization. For example, you can assess your security posture across AWS Amplify, Amazon SageMaker, Amazon Route 53, and more. Additionally, you can leverage Conformance Packs to group these new controls and deploy them across an account or an organization, streamlining your multi-account governance. For the full list of recently released rules, visit the AWS Config developer guide. For a description of each rule and the AWS Regions in which it is available, refer to our Config managed rules documentation. To start using Config rules, refer to our documentation. New rules launched:

ACM_CERTIFICATE_TRANSPARENT_LOGGING_ENABLED

AMPLIFY_APP_BUILD_SPEC_CONFIGURED

AMPLIFY_APP_PLATFORM_CHECK

AMPLIFY_BRANCH_AUTO_BUILD_ENABLED

AMPLIFY_BRANCH_BUILD_SPEC_CONFIGURED

AMPLIFY_BRANCH_FRAMEWORK_CONFIGURED

AMPLIFY_BRANCH_PULL_REQUEST_PREVIEW_ENABLED

APIGATEWAY_DOMAIN_NAME_TLS_CHECK

APIGATEWAYV2_INTEGRATION_PRIVATE_HTTPS_ENABLED

APPINTEGRATIONS_APPLICATION_APPROVED_ORIGINS_CHECK

APPINTEGRATIONS_APPLICATION_TAGGED

APPMESH_MESH_IP_PREF_CHECK

APPMESH_VIRTUAL_GATEWAY_LISTENERS_HEALTH_CHECK_ENABLED

APPMESH_VIRTUAL_NODE_LISTENERS_HEALTH_CHECK_ENABLED

APPMESH_VIRTUAL_NODE_LISTENERS_OUTLIER_DETECT_ENABLED

APPMESH_VIRTUAL_NODE_SERVICE_BACKENDS_TLS_ENFORCED

CLOUDTRAIL_EVENT_DATA_STORE_MULTI_REGION

CLOUDWATCH_ALARM_DESCRIPTION

CODEARTIFACT_REPOSITORY_TAGGED

CODEBUILD_PROJECT_TAGGED

EC2_IPAMSCOPE_TAGGED

EC2_LAUNCHTEMPLATE_EBS_ENCRYPTED

ECS_SERVICE_PROPAGATE_TAGS_ENABLED

ELBV2_TARGETGROUP_HEALTHCHECK_PROTOCOL_ENCRYPTED

ELBV2_TARGETGROUP_PROTOCOL_ENCRYPTED

EVENTSCHEMAS_DISCOVERER_TAGGED

EVENTSCHEMAS_REGISTRY_TAGGED

GROUNDSTATION_CONFIG_TAGGED

GROUNDSTATION_DATAFLOWENDPOINTGROUP_TAGGED

GROUNDSTATION_MISSIONPROFILE_TAGGED

HEALTHLAKE_FHIRDATASTORE_TAGGED

IAM_OIDC_PROVIDER_CLIENT_ID_LIST_CHECK

IAM_POLICY_DESCRIPTION

IMAGEBUILDER_DISTRIBUTIONCONFIGURATION_TAGGED

IMAGEBUILDER_IMAGEPIPELINE_TAGGED

IMAGEBUILDER_IMAGERECIPE_EBS_VOLUMES_ENCRYPTED

IMAGEBUILDER_IMAGERECIPE_TAGGED

IMAGEBUILDER_INFRASTRUCTURECONFIGURATION_TAGGED

KINESISVIDEO_SIGNALINGCHANNEL_TAGGED

KINESISVIDEO_STREAM_TAGGED

LAMBDA_FUNCTION_APPLICATION_LOG_LEVEL_CHECK

LAMBDA_FUNCTION_LOG_FORMAT_JSON

LAMBDA_FUNCTION_SYSTEM_LOG_LEVEL_CHECK

LIGHTSAIL_BUCKET_OBJECT_VERSIONING_ENABLED

MEDIAPACKAGE_PACKAGINGCONFIGURATION_TAGGED

MEDIATAILOR_PLAYBACKCONFIGURATION_TAGGED

MEMORYDB_SUBNETGROUP_TAGGED

NEPTUNE_CLUSTER_SNAPSHOT_IAM_DATABASE_AUTH_ENABLED

OPENSEARCHSERVERLESS_COLLECTION_DESCRIPTION

OPENSEARCHSERVERLESS_COLLECTION_STANDBYREPLICAS_ENABLED

PANORAMA_PACKAGE_TAGGED

RDS_CLUSTER_BACKUP_RETENTION_CHECK

RDS_GLOBAL_CLUSTER_AURORA_MYSQL_SUPPORTED_VERSION

RESILIENCEHUB_APP_TAGGED

RESILIENCEHUB_RESILIENCYPOLICY_TAGGED

ROUTE53_RECOVERY_CONTROL_CLUSTER_TAGGED

ROUTE53_RECOVERY_READINESS_CELL_TAGGED

ROUTE53_RECOVERY_READINESS_READINESS_CHECK_TAGGED

ROUTE53_RECOVERY_READINESS_RECOVERY_GROUP_TAGGED

ROUTE53_RECOVERY_READINESS_RESOURCE_SET_TAGGED

ROUTE53_RESOLVER_RESOLVER_ENDPOINT_TAGGED

S3_DIRECTORY_BUCKET_LIFECYCLE_POLICY_RULE_CHECK

SAGEMAKER_DATA_QUALITY_JOB_ENCRYPT_IN_TRANSIT

SAGEMAKER_DATA_QUALITY_JOB_ISOLATION

SAGEMAKER_FEATUREGROUP_DESCRIPTION

SAGEMAKER_INFERENCEEXPERIMENT_TAGGED

SAGEMAKER_MODEL_BIAS_JOB_ENCRYPT_IN_TRANSIT

SAGEMAKER_MODEL_BIAS_JOB_ISOLATION

SAGEMAKER_MODEL_EXPLAINABILITY_JOB_ENCRYPT_IN_TRANSIT

SAGEMAKER_MODEL_QUALITY_JOB_ENCRYPT_TRANSIT

SAGEMAKER_MONITORING_SCHEDULE_ISOLATION

SIGNER_SIGNINGPROFILE_TAGGED

TRANSFER_CONNECTOR_AS2_ENCRYPTION_ALGORITHM_CHECK

TRANSFER_CONNECTOR_AS2_MDN_SIGNING_ALGORITHM_CHECK

TRANSFER_CONNECTOR_AS2_SIGNING_ALGORITHM_CHECK
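Any of the managed rules above can be enabled with AWS Config's PutConfigRule API by referencing its identifier as the source. The sketch below builds such a request body for one of the new rules; the `ConfigRuleName` and description are illustrative choices.

```python
# Sketch of a PutConfigRule request body enabling one of the new managed
# rules listed above. For a managed rule, Source.Owner is "AWS" and
# Source.SourceIdentifier is the rule's identifier.
config_rule = {
    "ConfigRuleName": "lambda-function-log-format-json",  # free-form name
    "Description": "Checks that Lambda functions emit JSON-formatted logs.",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "LAMBDA_FUNCTION_LOG_FORMAT_JSON",
    },
}
```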

Amazon Redshift increases performance for new queries in dashboards and ETL workloads by up to 7x

Amazon Redshift improves the performance of BI dashboards and ETL workloads by speeding up new queries by up to 7x. This significantly improves the response times of low-latency SQL queries, such as those used in near real-time analytics applications, BI dashboards, ETL pipelines, and autonomous, goal-seeking AI agents. Customers experience substantially faster query response times as Redshift accelerates the process of preparing a SQL query for execution: queries start faster and return results more quickly. This improvement is automatically enabled at no additional cost.

To deliver this improvement, Redshift added a new optimization to query compilation in which new queries are processed immediately using composition. Composition is a technique that generates a lightweight arrangement of pre-existing logic while simultaneously creating highly optimized, query-specific code that is compiled and executed across available compute resources to further boost performance. Composition removes compilation from the critical path of query execution, providing immediate execution while compilation proceeds in the background. With this optimization, new queries processed by Redshift start faster and deliver performance consistent with subsequent runs. This optimization is enabled by default for any SQL query across all provisioned clusters and serverless workgroups, in all commercial AWS Regions where Amazon Redshift operates. It is available on the Redshift current track, with other tracks following in upcoming patch releases. No action is required from customers to benefit from this enhancement, and it is free of charge.
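The core idea, running a query immediately through pre-existing logic while optimized compilation happens off the critical path, can be illustrated with a toy engine. This is a conceptual sketch of the pattern only, not Redshift's implementation; class and plan names are invented.

```python
# Conceptual sketch: serve a first run via a lightweight "composed" path
# while optimized compilation proceeds in the background, then serve later
# runs from the compiled plan.
import time
from concurrent.futures import ThreadPoolExecutor

class QueryEngine:
    def __init__(self):
        self._compiled = {}                        # query -> optimized plan
        self._pool = ThreadPoolExecutor(max_workers=2)

    def _compile_optimized(self, query):
        time.sleep(0.05)                           # stand-in for slow codegen
        self._compiled[query] = f"optimized({query})"

    def execute(self, query):
        plan = self._compiled.get(query)
        if plan is None:
            # First run: start background compilation, but execute right
            # away via a composition of pre-existing logic.
            self._pool.submit(self._compile_optimized, query)
            plan = f"composed({query})"
        return f"result via {plan}"

engine = QueryEngine()
first = engine.execute("SELECT 1")   # served immediately, no compile stall
engine._pool.shutdown(wait=True)     # let background compilation finish
second = engine.execute("SELECT 1")  # served from the compiled plan
```

The point of the pattern is that the first run no longer pays the compilation latency; it trades peak efficiency on run one for an immediate start, and every subsequent run gets the fully compiled plan.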

Amazon SageMaker Unified Studio adds custom metadata filters

Amazon SageMaker Unified Studio adds custom metadata search filters, enabling customers to narrow catalog search results using organization-specific attributes. This helps customers find the right assets faster by filtering on fields like business region, data classification, or study name, in addition to existing keyword and semantic search.

With custom metadata search filters, customers can add filters based on any custom metadata fields available in their catalog, such as sample type or study ID. Filters support string fields with a “contains” operator and numeric fields (Integer, Long) with equals, greater-than, and less-than operators. Customers can also filter by asset name, description, and date range. Multiple filters can be combined, and filter selections persist across browser sessions. Custom metadata search filters are available in all AWS Regions where Amazon SageMaker Unified Studio is supported. Standard Amazon SageMaker pricing applies. To get started, navigate to the Browse Assets page in Amazon SageMaker Unified Studio and use the “+ Add Filter” button to create custom filters. You can also use the SearchListings API with metadata form attributes in the filters parameter. For more information, see the Amazon SageMaker Unified Studio documentation.
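The filter semantics, "contains" for string fields, equals/greater-than/less-than for numeric fields, with multiple filters combined, can be sketched as a local evaluation. Field names and the sample assets below are invented for illustration; this is not the SearchListings API itself.

```python
# Minimal sketch of the described filter semantics: string "contains",
# numeric eq/gt/lt, and multiple filters combined with AND.
OPS = {
    "contains": lambda value, needle: needle.lower() in str(value).lower(),
    "eq": lambda value, n: value == n,
    "gt": lambda value, n: value > n,
    "lt": lambda value, n: value < n,
}

def search(assets, filters):
    """Return assets matching every (field, op, operand) filter."""
    def matches(asset):
        return all(
            field in asset and OPS[op](asset[field], operand)
            for field, op, operand in filters
        )
    return [a for a in assets if matches(a)]

assets = [
    {"name": "trial-a", "study_id": 104, "region": "EMEA"},
    {"name": "trial-b", "study_id": 99, "region": "APAC"},
]
hits = search(assets, [("region", "contains", "em"), ("study_id", "gt", 100)])
```

Here only the first asset matches both filters: its region contains "em" and its study ID exceeds 100.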

Amazon OpenSearch Service now supports OpenSearch version 3.5

You can now run OpenSearch version 3.5 on Amazon OpenSearch Service. OpenSearch 3.5 introduces significant improvements in agentic AI capabilities, search relevance tooling, and observability features to help you build powerful agentic applications.

With this launch, agentic conversation memory captures conversation context and tool reasoning in persistent storage, enabling your agents to provide coherent, accurate responses across multi-turn conversations. In addition, context management optimizes what you send to large language models (LLMs) through automatic truncation and summarization, reducing your token costs while maintaining response quality. Finally, a redesigned no-code agent interface supports Model Context Protocol (MCP) integration, search templates, conversational memory, and single model configurations, allowing you to build sophisticated agents without writing code.

You can now tune search quality faster with expanded search relevance workbench capabilities. LLM-powered evaluation automatically assesses search results with customizable prompts, letting you scale relevance testing beyond manual judgments and accelerate quality improvements. Scheduled experiments run tests nightly, weekly, or monthly, helping you track search quality trends over time and catch regressions early. Enhanced single query comparison displays agentic search queries alongside agent summaries, making it easier to validate and optimize agent-driven search experiences.

For information on upgrading to OpenSearch 3.5, please see the documentation. OpenSearch 3.5 is now available in all AWS Regions where Amazon OpenSearch Service is available.

Amazon Connect expands agentic speech-to-speech voice experiences to the Europe (London) Region and adds three new voices

Amazon Connect now offers agentic speech-to-speech voice experiences in an additional AWS Region: Europe (London). Amazon Connect also adds three new speech-to-speech voices across US Spanish and UK English: Pedro (es-US), Amy (en-GB), and Brian (en-GB).

Amazon Connect’s agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and messaging channels to automate routine and complex customer service tasks. Connect’s agentic speech-to-speech voice AI agents understand not only what customers say but how they say it, adapting voice responses to match customer tone and sentiment while maintaining a natural conversational pace. With these updates, you can deliver agentic speech-to-speech voice experiences to customers in a new Region with a wider selection of voices.

To learn more about this feature, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale, visit the Amazon Connect website.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Architecture Blog

AWS Big Data Blog

Containers

AWS Database Blog

AWS for Industries

Artificial Intelligence

AWS Security Blog

Open Source Project

AWS CLI