3/11/2025, 12:00:00 AM ~ 3/12/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon Nova Creative Models now available in Europe
Today, Amazon announces the expansion of Amazon Nova creative models, including Amazon Nova Canvas and Amazon Nova Reel, to Europe (Dublin). These models are designed to generate high-quality images and videos from text and image inputs, providing customizable visual content for various applications. This expansion addresses the growing demand for automated, high-quality visual content generation, benefiting marketers, content creators, and developers who need efficient solutions for producing engaging media.

Amazon Nova creative models offer built-in controls to enable the safe and responsible use of AI, including watermarking for traceability, content moderation, and indemnification. Customers can now leverage these advanced capabilities to create compelling visuals that enhance their digital presence and user engagement. To learn more, see the Amazon Nova creative models page and read about Amazon Nova's responsible use of AI. To get started with Amazon Nova on Amazon Bedrock, visit the Amazon Bedrock console.
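For orientation, here is a minimal sketch of generating an image with Nova Canvas through Amazon Bedrock in the Europe (Dublin) Region. The model ID ("amazon.nova-canvas-v1:0") and the request body schema are assumptions based on the published Nova Canvas format; confirm both against the current Amazon Nova documentation before relying on them.

```python
# Minimal sketch: text-to-image with Amazon Nova Canvas via Amazon Bedrock.
# Assumptions: model ID and request schema match the Nova Canvas docs;
# eu-west-1 is used for Europe (Dublin).
import base64
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")

body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A watercolor skyline of Dublin at sunrise"},
    "imageGenerationConfig": {"numberOfImages": 1, "width": 1024, "height": 1024},
}

response = bedrock.invoke_model(
    modelId="amazon.nova-canvas-v1:0",  # assumed Nova Canvas model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())

# The response carries base64-encoded images; save the first one to disk.
with open("dublin.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```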
Amazon EC2 Allowed AMIs now integrates with AWS Config
Allowed AMIs, an account-wide Amazon EC2 setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts, now integrates with AWS Config. You can now use AWS Config rules to automatically monitor, detect, and report instances launched using AMIs that have not been allowed by Allowed AMIs.

Prior to today, you had to create custom scripts to monitor instance launches and assess the impact of enabling Allowed AMIs. Now with the integration of Allowed AMIs with AWS Config, you can track and detect non-compliant instances using the new AWS Config rule. By leveraging this rule in conjunction with the audit-mode functionality of Allowed AMIs, you can gain valuable insights into your instance launch patterns and identify any potential issues before enforcing stricter controls. This rule scans existing instances and monitors new instance launches, flagging instances launched with unapproved AMIs. This capability enables you to proactively identify and remediate violations before enabling Allowed AMIs in your accounts, simplifying governance across your AWS environment. By default, this rule is disabled for all AWS accounts. You can enable it by using the AWS CLI, SDKs, or Console. To learn more, please visit our documentation.
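As a rough illustration, the new managed rule can be enabled with an SDK call like the one below. The announcement does not name the managed rule identifier, so the SourceIdentifier here is a placeholder; look up the exact identifier in the AWS Config managed rules documentation.

```python
# Minimal sketch: enable the Allowed AMIs-related AWS Config managed rule.
# The SourceIdentifier is a placeholder, not the documented identifier.
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "allowed-amis-compliance-check",  # your chosen name
        "Source": {
            "Owner": "AWS",
            # Placeholder; replace with the documented managed rule identifier.
            "SourceIdentifier": "EC2_INSTANCE_LAUNCHED_WITH_ALLOWED_AMIS",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)
```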
Amazon EventBridge expands IAM execution role support to all targets
Amazon EventBridge expands execution role support to AWS Lambda, Amazon SNS, and Amazon SQS event bus targets, making this feature available for all target types. We recommend configuring execution roles for all your EventBridge targets to benefit from consistent permissions policies and dedicated invocation throttle limits.\n Amazon EventBridge Event Bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. An execution role is an AWS Identity and Access Management (IAM) role that EventBridge assumes when invoking a target, giving you fine-grained control over which AWS services and resources EventBridge can access. The expansion to Lambda, SNS, and SQS targets allows consistent permissions across all EventBridge targets, enables setting permissions for multiple targets within a single IAM policy, and can help manage throughput by using your account-specific limits. This feature is available in all AWS Regions, including AWS GovCloud (US). To learn more, please visit our documentation or get started in the AWS Management Console.
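For context, an execution role is attached per target when you configure the rule. The sketch below shows one plausible way to do this with the PutTargets API; the rule, bus, function, and role names are illustrative, and the role must trust events.amazonaws.com and grant permission to invoke the function.

```python
# Minimal sketch: attach a Lambda target to an EventBridge rule with an
# execution role that EventBridge assumes when invoking the target.
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="orders-created",                 # existing rule on the event bus
    EventBusName="my-application-bus",     # illustrative bus name
    Targets=[
        {
            "Id": "process-order-fn",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
            # IAM role EventBridge assumes for this invocation (placeholder ARN).
            "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeInvokeTargets",
        }
    ],
)
```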
Accelerate serverless development with ready-to-use Serverless Land Patterns in Visual Studio Code
AWS makes it easier for developers to build serverless applications by bringing Serverless Land’s extensive application pattern library directly into the Visual Studio Code (VS Code) IDE. This integration eliminates the need to switch between your development environment and external resources when building serverless architectures by enabling you to browse, search, and implement pre-built serverless patterns directly in the VS Code IDE. This new feature simplifies and accelerates the process of building serverless applications using the VS Code IDE.

Serverless Land provides hundreds of curated serverless application patterns covering popular use cases across AWS services like AWS Lambda, Amazon Simple Queue Service (SQS), Amazon API Gateway, AWS Step Functions, Amazon EventBridge, and many more. With Serverless Land integration in the VS Code IDE, you can now use the familiar VS Code interface to search and filter application patterns based on AWS services, Infrastructure as Code (IaC) frameworks, and language runtime requirements. When you find a pattern that matches your use case, you can preview the implementation details and download the pattern code directly to your workspace using the Quick Pick functionality of VS Code. This integration ensures that you have easy access to reliable serverless application patterns which are regularly updated and align with AWS best practices, enhancing your serverless development experience. The Serverless Land patterns are now available to all developers with the AWS Toolkit (v3.48.0 or later) installed in their VS Code IDE. To learn more about this experience and how to get started, visit the AWS Toolkit developer guide. To learn more about Serverless Land patterns, visit ServerlessLand.com.
Amazon EC2 R7i instances are now available in an additional AWS region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7i instances are available in the Asia Pacific (Osaka) Region.

Amazon EC2 R7i instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. R7i instances deliver up to 15% better price performance versus R6i instances. These instances are SAP certified and are a great choice for memory-intensive workloads, such as SAP, SQL and NoSQL databases, distributed web-scale in-memory caches, in-memory databases like SAP HANA, and real-time big data analytics like Hadoop and Spark. They offer larger instance sizes, up to 48xlarge, and two bare-metal sizes (metal-24xl, metal-48xl) for high-transaction and latency-sensitive workloads. The bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which enable efficient offload and acceleration of data operations and help optimize performance for these workloads. R7i instances support the new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. In addition, customers can now attach up to 128 EBS volumes to an R7i instance (versus 28 EBS volume attachments on R6i), allowing you to process larger amounts of data, scale workloads, and improve performance over R6i instances. To learn more, visit Amazon EC2 R7i Instances.
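As a quick reference, launching an R7i instance in the newly supported Region is an ordinary RunInstances call with the new instance type. The AMI ID below is a placeholder, and any R7i size up to 48xlarge (or the metal sizes) can be substituted.

```python
# Minimal sketch: launch an R7i instance in Asia Pacific (Osaka).
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-3")  # Osaka

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="r7i.4xlarge",
    MinCount=1,
    MaxCount=1,
)
```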
Amazon Neptune Database now supports R7i instances
Amazon Neptune Database now supports R7i database instances powered by custom 4th Generation Intel Xeon Scalable processors. R7i instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU along with the latest DDR5 memory. These instances are available for Neptune engine versions 1.4.3 and above in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm).

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easier to build and run applications that work with highly connected datasets. Compared to previous-generation R6i instances, R7i instances deliver up to 15% better price performance, powering graph use cases such as fraud graphs, knowledge graphs, customer 360 graphs, and security graphs. You can launch R7i instances for Neptune using the AWS Management Console or the AWS CLI. Upgrading a Neptune cluster to R7i instances requires a simple instance type modification for Neptune engine versions 1.4.3 or higher. For more information on pricing and regional availability, refer to the Amazon Neptune pricing page.
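The instance type modification mentioned above can also be done through the SDK. A minimal sketch follows; the instance identifier and class are illustrative, so confirm the R7i classes Neptune offers in your Region before applying the change.

```python
# Minimal sketch: move an existing Neptune instance (engine 1.4.3+)
# to an R7i instance class.
import boto3

neptune = boto3.client("neptune")

neptune.modify_db_instance(
    DBInstanceIdentifier="my-neptune-instance",  # placeholder identifier
    DBInstanceClass="db.r7i.4xlarge",            # assumed R7i class name
    ApplyImmediately=True,  # or False to wait for the next maintenance window
)
```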
Amazon Bedrock Flows and Prompt Management now available in AWS GovCloud (US) and more regions
Amazon Bedrock Flows and Amazon Bedrock Prompt Management are now available in the AWS GovCloud (US) and Europe (Stockholm) Regions. Flows helps you accelerate the creation, testing, and deployment of predefined generative AI workflows. You can use the visual builder or SDK to connect the latest foundation models, prompts, agents, knowledge bases, and other AWS services to create and test generative AI workflows. You can easily experiment with Flows using the visual builder or APIs, A/B test multiple flow versions, and deploy and scale to production using serverless infrastructure.

Prompt Management helps you simplify the creation, evaluation, versioning, and sharing of prompts to get the best responses from foundation models for your use cases. You can use the Prompt Builder to experiment with multiple foundation models, model configurations, and prompt messages. You can test and compare prompts in place using the Prompt Builder without deployment. To share prompts for use in downstream applications, you can create a version and make an API call to retrieve the prompt, as sketched after the resource list below. Both Bedrock Flows and Prompt Management are now available in the AWS GovCloud (US) and Europe (Stockholm) Regions, in addition to existing commercial Regions. To get started, see the following resources:
Blog post
Amazon Bedrock user guide for Flows
Amazon Bedrock user guide for Prompt Management
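Below is a minimal sketch of retrieving a versioned prompt for use in a downstream application. The prompt identifier and version are placeholders, and the response fields shown reflect the Bedrock Prompt Management API as documented; verify field names against the current SDK reference.

```python
# Minimal sketch: fetch a versioned prompt from Amazon Bedrock Prompt Management.
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="eu-north-1")  # Stockholm

prompt = bedrock_agent.get_prompt(
    promptIdentifier="PROMPT12345",  # placeholder prompt ID or ARN
    promptVersion="1",
)

# Each variant carries the prompt template text and its inference configuration.
for variant in prompt["variants"]:
    print(variant["name"], variant["templateConfiguration"]["text"]["text"])
```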
Amazon GameLift Servers launches Game Server Wrapper for rapid onboarding
We’re excited to introduce the Amazon GameLift Servers Game Server Wrapper, an open-source tool that helps significantly reduce the time required for game developers to onboard their game servers.

Developers told us they faced significant overhead integrating the Server SDK, including dependency management and custom code implementation. The Game Server Wrapper solves these challenges by eliminating the need for server SDK integration, making it easy to deploy game servers on Amazon GameLift Servers with zero code changes. The Game Server Wrapper supports game session management through built-in default functions to start and stop game sessions, making it easy to test and iterate on game builds. Developers package their game server executable with the wrapper, create an Amazon GameLift Servers Build resource, upload the build to Amazon GameLift Servers, and start game sessions without modifying their game server code. While the wrapper simplifies onboarding, it does not support all Amazon GameLift Servers SDK functions; in particular, the matchmaking and backfill APIs for Amazon GameLift Servers FlexMatch and player session state management capabilities are not supported. The Amazon GameLift Servers Game Server Wrapper is best suited for developers evaluating Amazon GameLift Servers with minimal setup, or for production use cases requiring basic game session management. Please check out the Amazon GameLift Servers Game Server Wrapper code repository and the technical documentation to accelerate your Amazon GameLift Servers onboarding experience.
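Once a fleet has been created from a wrapper-packaged build, starting a session is a standard GameLift API call with no server-side SDK code involved. A minimal sketch, assuming a fleet already exists (the fleet ID below is a placeholder):

```python
# Minimal sketch: start a game session on a fleet built from a
# wrapper-packaged game server build.
import boto3

gamelift = boto3.client("gamelift")

session = gamelift.create_game_session(
    FleetId="fleet-0123456789abcdef0",  # placeholder fleet ID
    MaximumPlayerSessionCount=8,
    Name="wrapper-smoke-test",
)
print(session["GameSession"]["GameSessionId"])
```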
AWS Blogs
AWS Japan Blog (Japanese)
- Strategies for Achieving Least Authority at Scale — Part 2
- Strategies for Achieving Least Authority at Scale — Part 1
- [Event Report & Material Release] AWS re:Invent 2024 Recap for Telecom Industry — Industry Edition
AWS DevOps & Developer Productivity Blog
- Announcing the end of support for Node.js 14.x and 16.x in AWS CDK
- Take control of your code with Amazon Q Developer’s new context features
AWS Machine Learning Blog
- Benchmarking Amazon Nova and GPT-4o models with FloTorch
- Deploy DeepSeek-R1 distilled models on Amazon SageMaker using a Large Model Inference container
- From fridge to table: Use Amazon Rekognition and Amazon Bedrock to generate recipes and combat food waste