5/29/2025, 12:00:00 AM ~ 5/30/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon GameLift Servers SDKs are now on GitHub
The Amazon GameLift Servers team is excited to announce that the Amazon GameLift Server SDKs for C++, C#, and Go are now open source and available in the amazon-gamelift GitHub organization. The game engine plugins and SDKs for Unreal Engine and Unity, along with developer scripts, have been moved under the same GitHub organization for improved accessibility.
With this launch, we’ve simplified the integration experience by removing common setup hurdles, such as the need for external tools like CMake and OpenSSL. Developers can quickly get started integrating the server SDKs with native support for cross-compilation, ARM server builds, and the Unreal Engine toolchain. By open-sourcing the Amazon GameLift Server SDKs, we want to encourage stronger collaboration with the developer community, offer faster issue resolution, enable direct contribution paths, and provide greater transparency in ongoing development. You can start today by exploring the repositories, raising issues, and contributing to the Amazon GameLift Server SDKs on GitHub. This new capability is available in all Amazon GameLift Servers supported Regions globally, except China. To learn more about version updates and full release details, visit the Amazon GameLift Servers Release Notes.
Amazon FSx for Lustre launches the Intelligent-Tiering storage class
Amazon FSx for Lustre launches the Intelligent-Tiering storage class, which delivers virtually unlimited scalability, the only fully elastic Lustre storage, and the lowest-cost Lustre file storage in the cloud. FSx for Lustre is a fully managed storage service that delivers terabytes per second of throughput, millions of IOPS, and the fastest storage performance for GPU instances in the cloud. The FSx Intelligent-Tiering storage class is optimized for HDD-based or mixed HDD/SSD workloads that have a mix of hot and cold data and don’t require consistent SSD-level performance. For these workloads, the FSx for Lustre Intelligent-Tiering storage class delivers up to 34% better price-performance compared to on-premises HDD file storage and up to 70% better price-performance compared to other cloud-based Lustre storage.
FSx for Lustre Intelligent-Tiering delivers high performance whether you’re starting with gigabytes of experimental data or managing massive petabyte-scale datasets for your most demanding HPC and AI workloads. The Intelligent-Tiering storage class helps you lower costs by automatically scaling your file storage up or down based on your access patterns. This new storage class eliminates expensive overprovisioning and storage management by only charging for the data you store, with automatic tiering between Frequent Access, Infrequent Access, and Archive tiers. For your latency-sensitive workloads, an optional SSD read cache delivers SSD-level performance at HDD pricing. The FSx for Lustre Intelligent-Tiering storage class is optimized to deliver the lowest cost and simplest storage management for compute-intensive workloads like weather forecasting, seismic imaging, genomic analysis, and ADAS training.
To learn more about the AWS Regions where the FSx Intelligent-Tiering storage class is available, see deployment options for FSx Lustre file systems.
For more information about this new storage class, see the Amazon FSx for Lustre documentation and AWS News Blog.
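For teams that script their infrastructure, the following is a minimal boto3 sketch of creating an FSx for Lustre file system with this storage class. The INTELLIGENT_TIERING storage type value, the throughput setting, and the read-cache option are assumptions based on this announcement, and the subnet ID is a placeholder; consult the Amazon FSx for Lustre documentation for the authoritative parameters.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch only: parameter names/values specific to Intelligent-Tiering are assumptions.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageType="INTELLIGENT_TIERING",        # assumed value for the new storage class
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "ThroughputCapacity": 4000,           # MB/s; assumed to replace a fixed storage capacity
        "DataReadCacheConfiguration": {       # optional SSD read cache (assumed parameter)
            "SizingMode": "PROPORTIONAL_TO_THROUGHPUT_CAPACITY",
        },
    },
)
```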
Amazon EMR now supports Apache Spark with full table access on AWS Lake Formation registered tables
Amazon EMR now supports read and write operations from Apache Spark jobs on AWS Lake Formation registered tables when the job role has full table access. This capability enables Data Definition Language (DDL) and Data Manipulation Language (DML) operations, including CREATE, ALTER, DELETE, UPDATE, and MERGE INTO statements, on Apache Hive and Iceberg tables from within the same Apache Spark application.
While Lake Formation’s fine-grained access control (FGAC) offers granular security controls at row, column, and cell levels, many ETL workloads simply need full table access. This new feature enables Apache Spark to directly read and write data when full table access is granted, removing FGAC limitations that previously restricted certain ETL operations. You can now leverage advanced Spark capabilities including RDDs, custom libraries, UDFs, and custom images (AMIs for EMR on EC2, custom images for EMR Serverless) with Lake Formation tables. Additionally, data teams can run complex, interactive Spark applications through SageMaker Unified Studio in compatibility mode while maintaining Lake Formation’s table-level security boundaries. This feature is available in all AWS Regions where Amazon EMR and AWS Lake Formation are supported. To learn more about this feature, visit the Lake Formation unfiltered access section in the EMR Serverless documentation.
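As an illustration, here is a hedged PySpark sketch of the pattern this enables: reading a Lake Formation registered Iceberg table and running a MERGE INTO from the same Spark application. The catalog, database, and table names (glue_catalog, sales_db.orders, sales_db.orders_staging) are hypothetical, and the configuration assumes an Iceberg catalog backed by the AWS Glue Data Catalog.

```python
from pyspark.sql import SparkSession

# Hypothetical catalog/database/table names; assumes an Iceberg catalog on the Glue Data Catalog.
spark = (
    SparkSession.builder.appName("lf-full-table-access-demo")
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .getOrCreate()
)

# Read a Lake Formation registered Iceberg table (the job role has full table access)
orders = spark.sql("SELECT * FROM glue_catalog.sales_db.orders")
orders.show(5)

# Write back with a DML statement such as MERGE INTO, from within the same application
spark.sql("""
    MERGE INTO glue_catalog.sales_db.orders AS t
    USING glue_catalog.sales_db.orders_staging AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```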
Announcing new Model Context Protocol (MCP) Servers for AWS Serverless and Containers
Today, AWS announces the release of Model Context Protocol (MCP) servers for AWS Lambda, Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Finch. MCP servers are a standard interface to enhance AI-assisted application development by equipping AI code assistants with real-time, contextual understanding of AWS Serverless and Container services including AWS Lambda, Amazon ECS, and Amazon EKS. With MCP servers, you can get from idea to production faster by giving your AI assistants access to an up-to-date framework on how to correctly interact with your AWS service of choice.
MCP servers enable AI code assistants to generate production-ready results by incorporating AWS operational best practices, Well-Architected principles, and service-specific optimizations. When building applications on AWS Lambda, Amazon ECS, Amazon EKS, and Finch, developers can use natural language to describe their requirements while AI code assistants handle service configurations, infrastructure setup, and cross-service integrations. The code assistant will use the tools and configurations provided in the MCP server to build and deploy applications. MCP servers also simplify operations by enabling AI-assisted, service-specific configuration of logging, monitoring, security controls, and troubleshooting failures. To learn more about MCP servers for AWS Serverless and Containers and how they can transform your AI-assisted application development, visit the AWS News Blog. To download and try out the open-source MCP servers for these services locally with your AI-enabled IDE of choice, visit the aws-labs GitHub repository.
Amazon S3 Express One Zone now supports granular access controls with S3 Access Points
Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports granular access controls using S3 Access Points. With S3 Access Points, you can refine access based on specific prefixes or API actions.
Now you can create tailored access policies for teams, applications, or individuals accessing data in S3 Express One Zone. Each access point provides a unique hostname, customizable permissions for granular access controls, and the ability to restrict access to a Virtual Private Cloud. S3 Access Points can help with various use cases such as data ingestion with write-only permissions, analytics processing with read-only access, or cross-account data sharing with specific restrictions. S3 Express One Zone support for granular access controls with S3 Access Points is available in all AWS Regions where the storage class is available. You can get started with S3 Access Points using the AWS Management Console, Amazon S3 REST API, AWS Command Line Interface, or the AWS Software Development Kit. To learn more about S3 Access Points, visit the S3 User Guide.
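For example, a read-only access point for an analytics team might look like the boto3 sketch below. The account ID, directory bucket name, access point name, role, and prefix are placeholders, and the exact policy actions and scoping options for S3 Express One Zone access points may differ from what is shown; see the S3 User Guide for the authoritative policy syntax.

```python
import json

import boto3

s3control = boto3.client("s3control")

account_id = "111122223333"                       # placeholder account
bucket = "my-express-bucket--usw2-az1--x-s3"      # placeholder directory bucket

# Create an access point attached to the S3 Express One Zone directory bucket
ap = s3control.create_access_point(
    AccountId=account_id,
    Name="analytics-read-only",                   # placeholder access point name
    Bucket=bucket,
)

# Attach a policy limiting an analytics role to read-only access under one prefix
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/AnalyticsRole"},
        "Action": ["s3:GetObject"],               # read-only; exact actions may vary for this storage class
        "Resource": f"{ap['AccessPointArn']}/object/reports/*",
    }],
}
s3control.put_access_point_policy(
    AccountId=account_id, Name="analytics-read-only", Policy=json.dumps(policy)
)
```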
AWS Amplify Hosting announces customizable build instances
AWS Amplify Hosting is excited to offer customizable build instances, providing you with more memory and CPU configurations to build your applications. This new feature allows developers to select from multiple build instances to optimize their build environment based on their application’s specific requirements.
Developers can now choose from three instance types: Standard (8 GB memory, 4 vCPUs, the default), Large (16 GB memory, 8 vCPUs), and XLarge (72 GB memory, 36 vCPUs). You can adjust the build instance for any Amplify app in the Amplify Console under Hosting → Build settings. Pricing for these instances can be found on Amplify’s pricing page. This feature is available in all 20 AWS Amplify Hosting Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Milan), Europe (Ireland), Europe (London), Europe (Paris), Middle East (Bahrain), and South America (São Paulo). To get started, check out our blog post or read the documentation.
AWS Security Hub now supports NIST SP 800-171 Revision 2
AWS Security Hub now supports automated security checks that align with the National Institute of Standards and Technology (NIST) Special Publication 800-171 Revision 2 (NIST SP 800-171 Rev. 2). NIST SP 800-171 Rev. 2 is a cybersecurity and compliance framework developed by NIST, an agency that’s part of the U.S. Department of Commerce. This compliance framework provides recommended security requirements for protecting the confidentiality of Controlled Unclassified Information (CUI) in systems and organizations that aren’t part of the U.S. federal government. In Security Hub, the NIST SP 800-171 Rev. 2 standard includes 63 automated controls that perform automated checks against AWS resources to evaluate compliance with NIST SP 800-171 Rev. 2 requirements.
The new standard is now available in all AWS Regions where Security Hub is currently available, including the AWS GovCloud (US) and the China Regions. To quickly enable the standard across your AWS environment, we recommend that you use Security Hub central configuration. With this approach, you can enable the standard in all or only some of your organization’s accounts and across all AWS Regions that are linked to Security Hub with a single action. To learn more, see NIST SP 800-171 Revision 2 in the AWS Security Hub User Guide. To receive notifications about new Security Hub features and controls, subscribe to the Security Hub SNS topic. You can also try Security Hub at no cost for 30 days with the AWS Free Tier offering.
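If you prefer to enable the standard programmatically in a single account and Region (central configuration covers the multi-account case), a minimal boto3 sketch might look like the following. It looks the standard up by an ARN substring rather than hard-coding the exact ARN, which is an assumption about how the standard is named.

```python
import boto3

securityhub = boto3.client("securityhub")

# Find the NIST SP 800-171 standard among the standards available in this Region
standards = securityhub.describe_standards()["Standards"]
nist_800_171 = next(s for s in standards if "800-171" in s["StandardsArn"])

# Enable its automated checks in the current account and Region
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": nist_800_171["StandardsArn"]}]
)
```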
AWS Cost Explorer now offers new Cost Comparison feature
AWS announces Cost Comparison, a new AWS Cost Explorer capability that helps customers understand cost changes between two months. Cost Comparison automatically detects significant cost changes between two months and surfaces the key factors driving these changes. With this launch, customers can now effortlessly gain insights into their monthly cost changes across their organization and quickly identify key drivers of spending changes.
Cost Comparison streamlines the time-consuming process of cost analysis by automatically identifying the most substantial cost changes across services, accounts, and Regions. It eliminates the need to switch between different views in Cost Explorer or export data to spreadsheets for manual comparison. The feature provides detailed breakdowns of cost drivers, including usage changes, credits, refunds, and volume discount impacts. A new Top Trends widget on the AWS Billing and Cost Management console home page shows the top 10 cost variations between the previous two months. For deeper analysis, customers can use the new Compare view within AWS Cost Explorer. This view offers comprehensive cost analysis capabilities, with insights into cost drivers that reveal changes between any two selected months in usage, credits, refunds, and volume discount impacts. Cost Comparison is available at no additional cost in all AWS commercial Regions, excluding AWS China Regions. To get started, customers can visit the AWS Billing and Cost Management console and view the Top Trends widget on the home page, or navigate to Cost Explorer and choose “Compare” in the Report Parameters panel. To learn more, see the AWS Cost Explorer documentation.
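The Compare view itself lives in the console, but the underlying idea can be approximated with the existing Cost Explorer API. The boto3 sketch below totals unblended cost per service for two example months and prints the largest deltas; it is a simplification of what Cost Comparison surfaces and does not break out credits, refunds, or volume discounts.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer API

def monthly_cost_by_service(start, end):
    """Return {service: unblended cost in USD} for the half-open range [start, end), YYYY-MM-DD."""
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    return {
        g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"])
        for g in resp["ResultsByTime"][0]["Groups"]
    }

april = monthly_cost_by_service("2025-04-01", "2025-05-01")  # example months
may = monthly_cost_by_service("2025-05-01", "2025-06-01")

# Largest month-over-month changes by service, similar in spirit to the Compare view
deltas = {svc: may.get(svc, 0.0) - april.get(svc, 0.0) for svc in set(april) | set(may)}
for svc, delta in sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:10]:
    print(f"{svc}: {delta:+.2f} USD")
```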
Amazon IVS Real-Time Streaming now supports participant replication
Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming now supports participant replication, allowing you to copy a participant from one IVS stage to another. With this capability, participants can appear in multiple stages simultaneously, facilitating cross-stage interactions.
One common use case in social live streaming applications is competitive modes, where two streamers are temporarily matched so they can interact with each other in real time. With participant replication, you can copy a participant to another stage and allow viewers of each video to see both streamers. Amazon IVS is a managed live streaming solution that is designed to make low-latency or real-time video available to viewers around the world. Video ingest and delivery are available over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available. To learn more, please visit the participant replication documentation page.
Amazon Neptune Database is now available in the Canada West (Calgary) and Asia Pacific (Melbourne) Regions
Amazon Neptune Database is now available in the Canada West (Calgary) and Asia Pacific (Melbourne) Regions on engine versions 1.4.5.0 and later. You can now create Neptune clusters using R8g, R7g, R7i, R6g, R6i, T4g, and T3 instance types in the Canada West (Calgary) and Asia Pacific (Melbourne) Regions.
Amazon Neptune Database is a fast, reliable, and fully managed graph database as a service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on the W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. Amazon Neptune supports Neptune Global Database, designed for globally distributed applications, allowing a single Neptune database to span multiple AWS Regions. To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and Region availability, refer to the Neptune pricing page and the AWS Region Table.
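A minimal boto3 sketch of creating a cluster in one of the new Regions follows. The cluster and instance identifiers are placeholders, and db.r8g.large is an assumed instance class name based on the R8g support noted above.

```python
import boto3

neptune = boto3.client("neptune", region_name="ca-west-1")  # Canada West (Calgary)

# Create the cluster on a supported engine version (identifiers are placeholders)
neptune.create_db_cluster(
    DBClusterIdentifier="demo-graph-cluster",
    Engine="neptune",
    EngineVersion="1.4.5.0",
)

# Add a Graviton-based writer instance (db.r8g.large is an assumed class name)
neptune.create_db_instance(
    DBInstanceIdentifier="demo-graph-writer",
    DBInstanceClass="db.r8g.large",
    Engine="neptune",
    DBClusterIdentifier="demo-graph-cluster",
)
```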
Amazon OpenSearch Service adds support for Script Plugins
Amazon OpenSearch Service now supports Script Plugins that allow you to add new scripting languages or custom scripting functionality to OpenSearch for operations like scoring, sorting, and field value transformations during search or indexing.
Until now, you could extend the search and analysis functions of OpenSearch using custom plugins. With this launch, you can implement the ScriptPlugin interface as part of your custom plugin to extend scripting functionality in OpenSearch. You can use the OpenSearch Service console or APIs to upload and associate the custom plugin with your domains. OpenSearch Service validates the plugin package for version compatibility, security, and permitted plugin operations. Script Plugins are now supported on all Amazon OpenSearch Service domains running OpenSearch version 2.15 or later, and are available in 14 Regions globally: US West (Oregon), US East (Ohio), US East (N. Virginia), South America (São Paulo), Europe (Paris), Europe (London), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Seoul), and Asia Pacific (Mumbai). To get started with custom plugins, visit our documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
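After building a plugin ZIP that implements the ScriptPlugin interface, associating it with a domain through the API might look like the boto3 sketch below. The S3 bucket, key, and domain name are placeholders, and the ZIP-PLUGIN package type value is an assumption; the console flow described in the documentation is equivalent.

```python
import boto3

opensearch = boto3.client("opensearch")

# Register the custom plugin package from S3 (bucket and key are placeholders)
pkg = opensearch.create_package(
    PackageName="my-script-plugin",
    PackageType="ZIP-PLUGIN",  # assumed package type value for custom plugins
    PackageSource={"S3BucketName": "my-plugin-bucket", "S3Key": "my-script-plugin-2.15.zip"},
)

# Associate the validated package with a domain running OpenSearch 2.15 or later
opensearch.associate_package(
    PackageID=pkg["PackageDetails"]["PackageID"],
    DomainName="my-domain",
)
```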
Amazon Managed Service for Prometheus now supports 95 day time range queries
Amazon Managed Service for Prometheus now supports queries with time ranges up to 95 days, an increase from the previous 32-day limit. With this feature, customers can query any 95-day window, including historical data, allowing them to perform month-over-month comparisons, analyze long-term system trends, and review historical incidents.
This feature is now available across all Amazon Managed Service for Prometheus workspaces in all Regions where the service is available. To get started, check out the product page and quotas page for more information about limits.
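To exercise the new limit from a script, you can issue a standard Prometheus query_range call against your workspace endpoint with a 95-day window, signing the request with SigV4 for the aps service. The sketch below uses botocore’s signer; the Region, workspace ID, and PromQL query are placeholders.

```python
import datetime
import json
import urllib.parse
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-west-2"                                          # placeholder Region
workspace_id = "ws-00000000-0000-0000-0000-000000000000"      # placeholder workspace ID
endpoint = f"https://aps-workspaces.{region}.amazonaws.com/workspaces/{workspace_id}/api/v1/query_range"

end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(days=95)                     # the new 95-day maximum window

params = {
    "query": "sum(rate(node_cpu_seconds_total[5m]))",         # placeholder PromQL
    "start": start.timestamp(),
    "end": end.timestamp(),
    "step": "1h",
}
url = endpoint + "?" + urllib.parse.urlencode(params)

# Sign the GET request with SigV4 for the Amazon Managed Service for Prometheus ("aps") service
request = AWSRequest(method="GET", url=url)
SigV4Auth(boto3.Session().get_credentials(), "aps", region).add_auth(request)

with urllib.request.urlopen(urllib.request.Request(url, headers=dict(request.headers))) as resp:
    print(json.dumps(json.loads(resp.read()), indent=2)[:1000])
```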
AWS DataSync simplifies and accelerates cross-cloud data transfers
AWS DataSync customers can now transfer data directly between storage in other clouds and Amazon S3, without needing to deploy DataSync agents. This new capability uses DataSync Enhanced mode to deliver increased performance and scalability, helping customers streamline their data pipelines and accelerate migrations from other clouds to AWS.
AWS DataSync is a secure, reliable, high-speed data transfer service that simplifies moving data over networks. This launch enables direct transfers to and from storage services in other clouds, including Google Cloud Storage, Microsoft Azure Blob Storage, and Oracle Cloud Object Storage. Using Enhanced mode, this new capability provides faster transfer speeds through parallel processing of data preparation, transfer, and verification, while supporting virtually unlimited object counts. Customers can monitor transfers using detailed metrics and reporting capabilities, all with a simplified setup that requires no agent deployment. This new capability is available in all AWS Regions where AWS DataSync is offered. To get started, visit the AWS DataSync console. For more information, see the AWS DataSync documentation.
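A hedged boto3 sketch of the agentless flow follows: create an Azure Blob source location and an S3 destination, then run an Enhanced mode task. The container URL, SAS token, bucket, and role ARN are placeholders, and omitting agent configuration for the Azure Blob location reflects this launch’s agentless capability rather than previously documented requirements.

```python
import boto3

datasync = boto3.client("datasync")

# Source: an Azure Blob Storage container, configured without a DataSync agent (agentless)
src = datasync.create_location_azure_blob(
    ContainerUrl="https://myaccount.blob.core.windows.net/my-container",  # placeholder
    AuthenticationType="SAS",
    SasConfiguration={"Token": "<sas-token>"},                            # placeholder
)

# Destination: an Amazon S3 bucket (bucket and role ARN are placeholders)
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Enhanced mode parallelizes prepare/transfer/verify and supports virtually unlimited object counts
task = datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    TaskMode="ENHANCED",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```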
CloudTrail Lake now supports event enrichment and expanded event size
Today, AWS announces two enhancements to CloudTrail Lake: event enrichment, which makes it easier to categorize, search, and analyze your AWS activity; and expanded event size, which improves visibility into API actions for more comprehensive security analysis. CloudTrail Lake is a managed data lake that enables you to aggregate, immutably store, and analyze your activity logs at scale.
With event enrichment, you can enrich your CloudTrail management and data events with additional information relevant to your business context. You can append resource tags and select AWS global condition keys to your events, making it easy to categorize, search, and analyze your AWS activity. Using resource tags in your events, you can easily create application-specific activity reports, or view AWS API activity based on the properties of the IAM principal. For example, you can see all delete actions taken by principals with a specific Principal Tag. Event enrichment integrates with CloudTrail Lake’s analytical capabilities, including AI-powered natural language query and summarization (Preview). With expanded event size, individual events can now be up to 1 MB, a significant increase from the previous 256 KB limit. This reduces the need for CloudTrail to truncate events, giving you higher visibility into API actions for more comprehensive security analysis. To get started, enable event enrichment and expanded event size through the AWS Management Console or AWS APIs on your CloudTrail Lake event data stores. These features are available in AWS commercial Regions where CloudTrail Lake is available. To learn more, see the CloudTrail documentation.
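Once enrichment is enabled on an event data store, the appended resource tags and condition keys become additional queryable fields. As a sketch, the boto3 code below runs a CloudTrail Lake SQL query against an event data store (the ID is a placeholder); filtering on a specific tag key is left as a comment because the enriched field names depend on your configuration.

```python
import time

import boto3

cloudtrail = boto3.client("cloudtrail")

eds_id = "EXAMPLE0-a1b2-c3d4-e5f6-EXAMPLE111111"   # placeholder event data store ID

# Recent delete actions; with enrichment enabled, add a predicate on the appended
# tag/condition-key fields (their names depend on your enrichment configuration).
statement = f"""
    SELECT eventTime, eventSource, eventName, userIdentity.arn
    FROM {eds_id}
    WHERE eventName LIKE 'Delete%'
    ORDER BY eventTime DESC
    LIMIT 50
"""

query_id = cloudtrail.start_query(QueryStatement=statement)["QueryId"]
while cloudtrail.describe_query(QueryId=query_id)["QueryStatus"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

for row in cloudtrail.get_query_results(QueryId=query_id)["QueryResultRows"]:
    print(row)
```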
AWS Blogs
AWS Japan Blog (Japanese)
- NTT Data’s AWS Japan Generative AI Practical Application Promotion Program Results Report: Creative Business Support Solution Development Using AI Agents
- Amazon Q Developer CLI supports image input in the terminal
- AWS Weekly Roundup: Claude 4 in Amazon Bedrock, EKS Dashboard, Community Events, and More (May 26, 2025)
AWS News Blog
- Amazon FSx for Lustre launches new storage class with the lowest-cost and only fully elastic Lustre file storage
- Enhance AI-assisted development with Amazon ECS, Amazon EKS and AWS Serverless MCP server
AWS Architecture Blog
AWS Big Data Blog
AWS Compute Blog
- Introducing AWS Serverless MCP Server: AI-powered development for modern applications
- Modernizing applications with AWS AppSync Events
Containers
- Introducing AI on EKS: powering scalable AI workloads with Amazon EKS
- Automating AI-assisted container deployments with the Amazon ECS MCP Server
- Optimizing data lakes with Amazon S3 Tables and Apache Spark on Amazon EKS
- Accelerating application development with the Amazon EKS MCP server
Front-End Web & Mobile
AWS for Industries
- On-device SLMs with agentic orchestration for hyper-personalized customer experiences in telecom
- Using Amazon Q in healthcare organizations
- How Mastercard Achieved Near-Zero Downtime Deployments for Fraud Detection
- Learn to transform your health systems at the DC Summit for AWS
AWS Machine Learning Blog
- Revolutionizing earth observation with geospatial foundation models on AWS
- Create an agentic RAG application for advanced knowledge discovery with LlamaIndex, and Mistral in Amazon Bedrock
- Text-to-image basics with Amazon Nova Canvas
- Real-world applications of Amazon Nova Canvas for interior design and product photography
AWS for M&E Blog
Networking & Content Delivery
AWS Storage Blog
- University of California Irvine backs up petabytes of research data to AWS
- How to consume tabular data from Amazon S3 Tables for insights and business reporting
- Automating paper-to-electronic healthcare claims processing with AWS