1/29/2026, 12:00:00 AM ~ 1/30/2026, 12:00:00 AM (UTC)
Recent Announcements
Announcing increased 1 MB payload size support in Amazon EventBridge
Amazon EventBridge increases event payload size from 256 KB to 1 MB, enabling developers to ingest richer, complex payloads for their event-driven workloads without the need to split, compress, or externalize data.

Amazon EventBridge is a serverless event router that enables you to create scalable event-driven applications by routing events between your applications, third-party SaaS applications, and AWS services. These applications often need to process rich contextual data, including large-language model prompts, telemetry signals, and complex JSON structures for machine learning outputs. The new 1 MB payload support in EventBridge Event Buses enables developers to streamline their architectures by including comprehensive data in a single event, reducing the need for complex data chunking or external storage solutions. This feature is available in all commercial AWS Regions where Amazon EventBridge is offered, except Asia Pacific (New Zealand), Asia Pacific (Thailand), Asia Pacific (Malaysia), Asia Pacific (Taipei), and Mexico (Central). For a full list, see the AWS Regional Services List. To learn more, visit the EventBridge documentation.
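As a minimal sketch of what the new limit allows, the helper below approximates the size EventBridge counts for a PutEvents entry and checks it against the 1 MB ceiling. The size accounting is an approximation (the exact rules, including how the Time field is counted, are in the EventBridge docs), and the source and detail-type names are hypothetical:

```python
import json

# New EventBridge Event Bus limit per the announcement (previously 256 KB).
MAX_ENTRY_BYTES = 1024 * 1024  # 1 MB

def entry_size_bytes(entry):
    """Approximate the size EventBridge counts for a PutEvents entry:
    the UTF-8 bytes of Source, DetailType, and Detail. See the EventBridge
    docs for the exact accounting."""
    return sum(
        len(str(entry.get(k, "")).encode("utf-8"))
        for k in ("Source", "DetailType", "Detail")
    )

# A rich payload that would have exceeded the old 256 KB limit.
entry = {
    "Source": "my.app",                      # hypothetical source name
    "DetailType": "model.inference.result",  # hypothetical detail type
    "Detail": json.dumps({"prompt": "x" * 500_000, "tokens": 12345}),
}

assert 256 * 1024 < entry_size_bytes(entry) <= MAX_ENTRY_BYTES

# With boto3 installed and credentials configured, the entry is sent as-is,
# with no chunking or S3 pointer indirection:
# boto3.client("events").put_events(Entries=[entry])
```

An entry like this previously had to be split across events or externalized to S3; with the raised limit it can be routed in a single PutEvents call.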
Amazon Bedrock now supports server-side custom tools using the Responses API
Amazon Bedrock now supports server-side tools in the Responses API using OpenAI API-compatible service endpoints. Bedrock already supports client-side tool use with the Converse, Chat Completions, and Responses APIs. Now, with the launch of server-side tool use for the Responses API, Amazon Bedrock calls the tools directly without going through a client, enabling your AI applications to perform real-time, multi-step actions such as searching the web, executing code, and updating databases within the organizational, governance, compliance, and security boundaries of your AWS accounts. You can either submit your own custom Lambda function to run custom tools or use AWS-provided tools, such as notes and tasks.

Server-side tools using the Responses API are available starting today with OpenAI’s GPT OSS 20B/120B models in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Mumbai), South America (São Paulo), Europe (Ireland), Europe (London), and Europe (Milan) AWS Regions. Support for other Regions and models is coming soon.
To get started, visit the service documentation.
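To illustrate the shape of such a request, here is a hypothetical body for the OpenAI-compatible Responses endpoint that registers a Lambda-backed custom tool. The model identifier, the `lambda_arn` wiring, and the exact tool schema are all assumptions for illustration; the service documentation has the authoritative format:

```python
import json

# Hypothetical request body for Bedrock's OpenAI-compatible Responses API.
# Model ID and the field that attaches a customer Lambda are placeholders.
body = {
    "model": "openai.gpt-oss-120b",  # placeholder model identifier
    "input": "Create a task to review the Q3 report tomorrow.",
    "tools": [
        {
            # Executed server-side by Bedrock, not round-tripped to the client:
            "type": "function",
            "name": "create_task",
            "description": "Create a task in the task store.",
            # Assumed wiring to a customer-owned Lambda function:
            "lambda_arn": "arn:aws:lambda:us-east-1:123456789012:function:create-task",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "due": {"type": "string", "format": "date"},
                },
                "required": ["title"],
            },
        }
    ],
}

payload = json.dumps(body)  # what an HTTP client would POST to the endpoint
```

Because the tool runs server-side, Bedrock invokes the Lambda function itself and feeds the result back into the model turn, rather than returning a tool-call for your client to execute.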
Amazon Cognito introduces inbound federation Lambda triggers
Amazon Cognito introduces inbound federation Lambda triggers that enable you to transform and customize federated user attributes during the authentication process. You can now modify responses from external SAML and OIDC providers before they are stored in your user pool, providing complete programmatic control over the federation flow without requiring changes to your identity provider configuration.

The inbound federation Lambda trigger addresses current limitations in federated authentication workflows, particularly issues caused by attribute size limits and the need for selective attribute storage from external identity providers. For example, large group attributes from external SAML or OIDC identity providers that exceed Cognito’s 2,048 character limit per attribute can block the authentication flow. This capability allows you to add, override, or suppress attribute values, such as modifying large group attributes, before creating new federated users or updating existing federated user profiles in Cognito.
The new inbound federation Lambda trigger is available through hosted UI (classic) and managed login in all AWS Regions where Amazon Cognito is available. To get started, configure the trigger using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), Cloud Development Kit (CDK), or AWS CloudFormation by adding the new parameter to your User Pool LambdaConfig. To learn more, see the Amazon Cognito Developer Guide for implementation examples and best practices.
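A minimal handler sketch for such a trigger might override an oversized group attribute and suppress one entirely. The event/response field names here are assumptions (the actual shapes are defined in the Amazon Cognito Developer Guide); the 2,048-character limit comes from the announcement:

```python
# Hypothetical event/response shape for the inbound federation trigger --
# the real field names are in the Amazon Cognito Developer Guide.
MAX_ATTR_LEN = 2048  # Cognito's per-attribute character limit

def lambda_handler(event, context):
    attrs = dict(event["request"]["userAttributes"])

    # Override: keep only the groups this app cares about, so the value
    # fits under the per-attribute limit instead of blocking sign-in.
    groups = attrs.get("custom:groups", "")
    if len(groups) > MAX_ATTR_LEN:
        wanted = [g for g in groups.split(",") if g.startswith("app-")]
        attrs["custom:groups"] = ",".join(wanted)[:MAX_ATTR_LEN]

    # Suppress: drop an attribute we never want stored in the user pool.
    attrs.pop("custom:raw_assertion", None)

    event["response"] = {"userAttributes": attrs}
    return event
```

The same pattern extends to adding attributes (e.g. a derived tenant ID) before the federated profile is created or updated.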
Amazon Keyspaces (for Apache Cassandra) introduces pre-warming with WarmThroughput for your tables
Amazon Keyspaces (for Apache Cassandra) now supports table pre-warming, allowing you to proactively prepare both new and existing tables to meet future traffic demands. This capability is available for tables in both provisioned and on-demand capacity modes, including multi-Region replicated tables.

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra–compatible database service. Amazon Keyspaces is serverless, so you pay for only the resources that you use and you can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. While Amazon Keyspaces automatically scales to accommodate growing workloads, certain scenarios like application launches, marketing campaigns, or seasonal events can create sudden traffic spikes that exceed normal scaling patterns. With pre-warming, you can now manually specify your expected peak throughput requirements during table creation or update operations, ensuring your tables are immediately ready to handle large traffic surges without scaling delays or increased error rates. The pre-warming process is non-disruptive and runs asynchronously, allowing you to continue making other table modifications while pre-warming is in progress. Pre-warming incurs a one-time charge based on the difference between your specified values and the baseline capacity. The feature is now available in all AWS Commercial and AWS GovCloud (US) Regions where Amazon Keyspaces is offered. To learn more, visit the pre-warming launch blog or Amazon Keyspaces documentation.
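As a sketch of what an update call might look like, the parameters below pre-warm a table for an expected launch-day peak. The `warmThroughput` field name and its sub-fields are assumptions modeled on the announcement's "WarmThroughput" naming; the keyspace, table, and throughput values are hypothetical:

```python
# Hypothetical parameters for the Amazon Keyspaces UpdateTable API --
# the exact warm-throughput field names are assumptions; consult the
# Amazon Keyspaces documentation for the authoritative request shape.
params = {
    "keyspaceName": "media",          # hypothetical keyspace
    "tableName": "playback_events",   # hypothetical table
    # Pre-warm for an expected launch-day peak. Billed once, on the
    # difference between these values and the table's baseline capacity.
    "warmThroughput": {
        "readUnitsPerSecond": 150_000,
        "writeUnitsPerSecond": 50_000,
    },
}

# With boto3 installed and credentials configured:
# boto3.client("keyspaces").update_table(**params)
# Pre-warming runs asynchronously and is non-disruptive, so other table
# modifications can proceed while it is in progress.
```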
AWS announces Deployment Agent SOPs in AWS MCP Server (preview)
AWS announces the launch of deployment Standard Operating Procedures (SOPs) available in the AWS MCP Server. SOPs are structured, natural language instructions that guide AI agents through complex, multi-step tasks to ensure consistent, reliable, and efficient behavior. With these automated procedures, customers can deploy web applications to their AWS account using natural language prompts from any MCP-compatible IDE or CLI, including Kiro, Kiro CLI, Cursor, and Claude Code. Deployment works by generating AWS CDK infrastructure, deploying CloudFormation stacks, and creating CI/CD pipelines with recommended AWS security best practices.

Previously, developers struggled to take their vibe-coded applications to production with DevOps best practices in place. Now, developers can move quickly from prototype to production in as little as one prompt. When you ask your AI assistant configured with AWS MCP Server to deploy your web application, your AI agent will follow the multi-step plan defined in Agent SOPs to analyze the project structure, generate CDK infrastructure, and deploy a preview environment hosted on Amazon S3 and Amazon CloudFront. Once you are ready, it can configure AWS CodePipeline for automated production deployments from source repositories, setting up CI/CD automatically for your application. The Agent SOPs support web applications built with popular frameworks including React, Vue.js, Angular, and Next.js. Deployment documentation is automatically created in the repository, enabling agents to handle future deployments, query logs for troubleshooting, and resume work across sessions. The Agent SOPs are available in preview as part of the AWS MCP Server at no additional cost in the US East (N. Virginia) Region. You pay only for AWS resources you create and applicable data transfer costs. To get started, see the AWS MCP Server documentation.
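For orientation, an MCP-compatible client registers a server through a JSON config along these lines. This is a generic MCP client config sketch; the command, package name, and environment variables shown are assumptions, so check the AWS MCP Server documentation for the actual setup instructions:

```json
{
  "mcpServers": {
    "aws": {
      "command": "uvx",
      "args": ["awslabs.aws-mcp-server@latest"],
      "env": {
        "AWS_PROFILE": "dev",
        "AWS_REGION": "us-east-1"
      }
    }
  }
}
```

Once the server is registered, a single prompt such as "deploy this web app to my AWS account" is enough for the agent to pick up the deployment SOP and walk through the analyze-generate-deploy steps described above.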
Amazon GameLift Servers now supports automatic scaling to and from zero instances
Amazon GameLift Servers now enables automatic scaling to and from zero instances, addressing a critical cost optimization challenge for game developers. Previously, developers had to maintain running instances even during periods of low or no activity in order for Fleet autoscaling to remain active. This resulted in unnecessary infrastructure costs during off-peak hours. With automatic scaling to and from zero instances, game developers using Amazon GameLift Servers can optimize their multiplayer gaming infrastructure costs while maintaining responsive performance.

By eliminating charges for unused instances during inactive periods, while automatically scaling up when game sessions are requested, this new capability delivers significant cost savings for game developers. This is particularly valuable for games with distinct peak and off-peak periods, seasonal or event-based games, new game launches with uncertain traffic patterns, and regional games with time-zone specific activity. Additionally, scaling decisions no longer need manual intervention, as Amazon GameLift Servers intelligently adapts to natural gaming activity patterns. The automatic scaling to zero instances capability is available in all Amazon GameLift Servers supported Regions. To learn more about Amazon GameLift Servers automatic scaling capabilities and implementation details, visit the Amazon GameLift Servers documentation.
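In practice, this means a fleet's minimum capacity can now be set to zero alongside a target-based scaling policy. The sketch below builds the parameters for the existing GameLift `UpdateFleetCapacity` and `PutScalingPolicy` APIs; the fleet ID and numeric values are hypothetical, and the boto3 calls are shown commented so the block stays self-contained:

```python
# Allow the fleet to scale all the way down to zero instances when idle.
capacity = {
    "FleetId": "fleet-1234abcd",  # placeholder fleet ID
    "MinSize": 0,                 # new: fleet may drain to zero when idle
    "MaxSize": 20,
    "DesiredInstances": 0,        # start empty; autoscaling brings it up
}

# Target-based policy: maintain a buffer of available game sessions, so
# the fleet scales up from zero when session placement requests arrive.
policy = {
    "FleetId": "fleet-1234abcd",
    "Name": "keep-session-buffer",
    "PolicyType": "TargetBased",
    "MetricName": "PercentAvailableGameSessions",
    "TargetConfiguration": {"TargetValue": 10.0},
}

# With boto3 installed and credentials configured:
# gl = boto3.client("gamelift")
# gl.update_fleet_capacity(**capacity)
# gl.put_scaling_policy(**policy)
```

A `MinSize` of 0 is the piece that was previously impractical: keeping autoscaling active no longer requires paying for at least one warm instance through off-peak hours.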
AWS Blogs
AWS Japan Blog (Japanese)
- Achieve fine-grained resource control with Amazon Redshift Serverless queue-based QMR
- Up to 90% reduction in test time — Amazon Connect test and simulation capabilities
- Behind Amazon Connect: Evolving as an Innovator
AWS Architecture Blog
AWS Big Data Blog
- Build a trusted foundation for data and AI using Alation and Amazon SageMaker Unified Studio
- Reduce EMR HBase upgrade downtime with the EMR read-replica prewarm feature