4/8/2025, 12:00:00 AM ~ 4/9/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon OpenSearch Ingestion now available in AWS Europe (Spain) Region
Starting today, customers can use Amazon OpenSearch Ingestion in the Europe (Spain) Region for ingesting data into their Amazon OpenSearch Service managed clusters or serverless collections.
Amazon OpenSearch Ingestion is a fully managed data ingestion tier that allows you to ingest and process data before indexing it in Amazon OpenSearch managed clusters or serverless collections. Amazon OpenSearch Ingestion provides a no-code experience to filter, transform, redact, and route data into Amazon OpenSearch Service. Amazon OpenSearch Ingestion automatically provisions and scales the underlying resources to meet the fluctuating demands of your workloads. With this launch, Amazon OpenSearch Ingestion is now generally available in 16 AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm). To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Ingestion Developer Guide.
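For context, OpenSearch Ingestion pipelines are defined with a Data Prepper-based YAML configuration. The fragment below is a minimal sketch of a log pipeline with an HTTP source and an OpenSearch Service domain sink; the domain endpoint, IAM role ARN, and index name are placeholders.

```yaml
version: "2"
log-pipeline:
  source:
    http:
      path: "/log-pipeline/logs"        # HTTP endpoint that clients push logs to
  processor:
    - grok:                             # parse raw Apache-style log lines
        match:
          log: ["%{COMMONAPACHELOG}"]
  sink:
    - opensearch:
        hosts: ["https://search-my-domain.eu-south-2.es.amazonaws.com"]  # placeholder domain
        index: "application-logs"
        aws:
          sts_role_arn: "arn:aws:iam::111122223333:role/OSIPipelineRole"  # placeholder role
          region: "eu-south-2"          # Europe (Spain)
```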
Amazon EC2 C6in instances are now available in AWS Asia Pacific (Osaka) Region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in the AWS Asia Pacific (Osaka) Region. These sixth-generation network-optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, up to 2x the network bandwidth of comparable fifth-generation instances.
Customers can use C6in instances to scale the performance of applications such as network virtual appliances (firewalls, virtual routers, load balancers), Telco 5G User Plane Function (UPF), data analytics, high performance computing (HPC), and CPU-based AI/ML workloads. C6in instances are available in 10 different sizes, including a bare metal size, with up to 128 vCPUs. These sixth-generation x86-based network-optimized EC2 instances deliver up to 100 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth and up to 400K IOPS. C6in instances offer Elastic Fabric Adapter (EFA) networking support on the 32xlarge and metal sizes. C6in instances are available in these AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), Israel (Tel Aviv), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Africa (Cape Town), South America (Sao Paulo), Canada (Central), and AWS GovCloud (US-West, US-East). To learn more, see the Amazon EC2 C6in instances page. To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
Meta’s Llama 4 now available in Amazon SageMaker JumpStart
The first models in the new Llama 4 herd of models, Llama 4 Scout 17B and Llama 4 Maverick 17B, are now available on AWS. You can access Llama 4 models in Amazon SageMaker JumpStart. These advanced multimodal models empower you to build more tailored applications that respond to multiple types of media. Llama 4 offers improved performance at lower cost compared to Llama 3, with expanded language support for global applications. Featuring a mixture-of-experts (MoE) architecture, these models deliver efficient multimodal processing for text and image inputs, improved compute efficiency, and enhanced AI safety measures.
According to Meta, the smaller Llama 4 Scout 17B model is the best multimodal model in the world in its class and is more powerful than Meta’s Llama 3 models. Scout is a general-purpose model with 17 billion active parameters, 16 experts, and 109 billion total parameters that delivers state-of-the-art performance for its class. Scout significantly increases the context length from 128K tokens in Llama 3 to an industry-leading 10 million tokens. This opens up a world of possibilities, including multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast code bases. Llama 4 Maverick 17B is a general-purpose model that comes in both quantized (FP8) and non-quantized (BF16) versions, featuring 128 experts, 400 billion total parameters, and a 1 million token context length. It excels in image and text understanding across 12 languages, making it suitable for versatile assistant and chat applications. Meta’s Llama 4 models are available in Amazon SageMaker JumpStart in the US East (N. Virginia) AWS Region. To learn more, read the launch blog and technical blog. These models can be accessed in Amazon SageMaker Studio.
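As a rough sketch, deploying one of these models from SageMaker JumpStart with the SageMaker Python SDK might look like the following; the model_id is an assumption, so check the JumpStart model catalog for the exact identifier.

```python
# Hypothetical sketch of deploying a Llama 4 JumpStart model with the
# SageMaker Python SDK; the model_id below is an assumed placeholder.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-vlm-llama-4-scout-17b-16e-instruct")
predictor = model.deploy(accept_eula=True)  # Meta models require accepting the EULA

# Simple text-generation request against the deployed endpoint
response = predictor.predict({
    "inputs": "Summarize the advantages of mixture-of-experts architectures.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.2},
})
print(response)
```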
Amazon Bedrock Guardrails announces new capabilities to safely build generative AI applications
Amazon Bedrock Guardrails announces new capabilities to safely build generative AI applications at scale. These new capabilities offer greater flexibility, finer-grained control, and ease of use when applying the configurable safeguards provided by Bedrock Guardrails, helping you align them with your use cases and responsible AI policies.
Bedrock Guardrails now offers a detect mode that provides a preview of the expected results from your configured policies, allowing you to evaluate the effectiveness of the safeguards before deploying them. This enables faster iteration with different combinations and strengths of policies and accelerates time-to-production, since you can fine-tune your guardrails before deployment. Guardrails now offers more configurability, with options to enable policies on input prompts, model responses, or both. This is a significant improvement over the previous default, where policies were automatically applied to both inputs and outputs, and the finer-grained control lets you selectively apply safeguards so they work for you. Bedrock Guardrails offers sensitive information filters that detect personally identifiable information (PII) with two modes: Block, where requests containing sensitive information are blocked entirely, and Mask, where sensitive information is redacted and replaced with identifier tags. You can now use either mode for both input prompts and model responses, giving you the flexibility and ease of use to safely build generative AI applications at scale. These new capabilities are available in all AWS Regions where Amazon Bedrock Guardrails is supported. To learn more, see the blog post, technical documentation, and the Bedrock Guardrails product page.
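For illustration, a standalone guardrail check with the ApplyGuardrail API might look like the sketch below; the guardrail ID is a placeholder, and the example assumes a guardrail already configured with a sensitive information filter.

```python
# A minimal sketch, assuming an existing guardrail with a sensitive
# information (PII) filter; the guardrail ID and version are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime")

response = runtime.apply_guardrail(
    guardrailIdentifier="gr-EXAMPLE123",  # placeholder guardrail ID
    guardrailVersion="DRAFT",
    source="INPUT",                       # assess the user prompt only
    content=[{"text": {"text": "My card number is 4111 1111 1111 1111."}}],
)

# "GUARDRAIL_INTERVENED" means a policy matched; any masked or blocked
# text is returned under "outputs".
print(response["action"], response.get("outputs"))
```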
Amazon Bedrock now offers Pixtral Large 25.02, a multimodal model from Mistral AI
AWS announces the availability of Pixtral Large 25.02 in Amazon Bedrock, a 124B parameter model with multimodal capabilities that combines state-of-the-art image understanding with powerful text processing. AWS is the first cloud provider to deliver Pixtral Large 25.02 as a fully managed, serverless model. This model delivers frontier-class performance across document analysis, chart interpretation, and natural image understanding tasks, while maintaining the advanced text capabilities of Mistral Large 2.
With a 128K context window, Pixtral Large 25.02 achieves best-in-class performance on key benchmarks including MathVista, DocVQA, and VQAv2. The model features comprehensive multilingual support across dozens of languages and is trained on over 80 programming languages. Key capabilities include advanced mathematical reasoning, native function calling, JSON outputting, and robust context adherence for Retrieval Augmented Generation (RAG) applications.
Pixtral Large 25.02 is now available in Amazon Bedrock in seven AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Paris), and Europe (Stockholm). For more information on supported Regions, visit the Amazon Bedrock Model Support by Regions guide.
To learn more about Pixtral Large 25.02 and its capabilities, visit the Mistral AI product page. To get started with Pixtral Large 25.02 in Amazon Bedrock, visit the Amazon Bedrock console.
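A multimodal request through the Bedrock Converse API might look like the sketch below; the model ID is an assumption, so check the Bedrock console or model catalog for the exact identifier in your Region.

```python
# Hypothetical sketch of calling Pixtral Large via the Bedrock Converse API
# with an image input; the model ID is an assumed placeholder.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("chart.png", "rb") as f:
    image_bytes = f.read()

response = client.converse(
    modelId="mistral.pixtral-large-2502-v1:0",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "Describe the main trend shown in this chart."},
        ],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```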
Amazon S3 Tables are now available in four additional AWS Regions
Amazon S3 Tables are now available in four additional AWS Regions: Asia Pacific (Osaka), Europe (Paris), Europe (Spain), and US West (N. California). S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale.
With this expansion, S3 Tables are now generally available in nineteen AWS Regions. To learn more, visit the product page, documentation, and the S3 pricing page.
Amazon Nova Sonic now available in Amazon Bedrock
Today, Amazon introduces Amazon Nova Sonic, a new foundation model that unifies speech understanding and generation into a single model to enable human-like voice conversations in artificial intelligence (AI) applications. Amazon Nova Sonic enables developers to build real-time conversational AI applications in Amazon Bedrock, with industry-leading price performance and low latency. It can understand speech in different speaking styles and generate speech in expressive voices, including both masculine-sounding and feminine-sounding voices, in English accents including American and British. Amazon Nova Sonic’s novel architecture can adapt the intonation, prosody, and style of the generated speech response to align with the context and content of the speech input. Additionally, Amazon Nova Sonic allows for function calling and knowledge grounding with enterprise data using Retrieval-Augmented Generation (RAG). Amazon Nova Sonic is developed with responsible AI in mind and features built-in protections including content moderation and watermarking.
To help developers build real-time applications with Amazon Nova Sonic, AWS is also announcing the launch of a new bidirectional streaming API in Amazon Bedrock. This API enables two-way streaming of content, which is critical for low-latency interactive communication between a human user and the AI model.
Amazon Nova Sonic can be used to voice-enable virtually any application. It has been extensively tested for a wide range of applications, including enabling customer service call automation at contact centers, outbound marketing, voice-enabled personal assistants and agents, and interactive education and language learning.
The Amazon Nova Sonic model is now available in Amazon Bedrock in the US East (N. Virginia) AWS Region. To learn more, read the AWS News Blog, Amazon Nova Sonic product page, and Amazon Nova Sonic User Guide. To get started with Amazon Nova Sonic in Amazon Bedrock, visit the Amazon Bedrock console.
AWS SAM now supports Amazon API Gateway Custom Domain Names for private REST APIs
AWS Serverless Application Model (AWS SAM) now supports the Amazon API Gateway custom domain names feature for private REST APIs. Developers building serverless applications using SAM can now seamlessly incorporate custom domain names for private APIs directly in their SAM templates, eliminating the need to configure custom domain names separately using other tools.
API Gateway allows you to create a custom domain name, like private.example.com, for your private REST APIs, enabling you to provide API callers with a simpler, more intuitive URL. With a private custom domain name, you can reduce complexity, configure security measures with TLS encryption, and manage the lifecycle of the TLS certificate associated with your domain name. AWS SAM is a collection of open-source tools (e.g., SAM, SAM CLI) that make it easy for you to build and manage serverless applications through the authoring, building, deploying, testing, and monitoring phases of your development lifecycle. This launch enables you to easily configure custom domain names for your private REST APIs using SAM and the SAM CLI. To get started, update the SAM CLI to the latest version and modify your SAM template to set the EndpointConfiguration to PRIVATE and specify a policy document in the Policy field of the Domain property of the AWS::Serverless::Api resource, as sketched below. SAM will then automatically generate DomainNameV2 and BasePathMappingV2 resources under AWS::Serverless::Api. To learn more, visit the AWS SAM documentation. You can learn more about custom domain names for private REST APIs in the API Gateway blog post.
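As a rough illustration, a template fragment might look like the following. This is a hypothetical sketch based on the description above: the domain name, certificate ARN, and VPC endpoint ID are placeholders, and the exact field names and policy shape should be verified against the AWS SAM documentation.

```yaml
# Hypothetical SAM template fragment for a private REST API with a custom
# domain name; all identifiers below are placeholders.
Resources:
  PrivateApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      EndpointConfiguration: PRIVATE          # private REST API
      Domain:
        DomainName: private.example.com
        CertificateArn: arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE
        EndpointConfiguration: PRIVATE        # new: private custom domain name
        Policy:                               # resource policy for the domain
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal: "*"
              Action: execute-api:Invoke
              Resource: execute-api:/*
              Condition:
                StringEquals:
                  aws:SourceVpce: vpce-0123456789abcdef0  # placeholder VPC endpoint
```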
Amazon EC2 M8g instances now available in AWS Asia Pacific (Mumbai) and AWS Asia Pacific (Hyderabad)
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in the AWS Asia Pacific (Mumbai) and AWS Asia Pacific (Hyderabad) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and the Porting Advisor for Graviton. To get started, see the AWS Management Console.
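A small sketch of launching an M8g instance in the Mumbai Region with boto3 follows; the AMI ID is a placeholder and must reference an arm64 (Graviton-compatible) image.

```python
# Minimal sketch: launch an M8g instance in Asia Pacific (Mumbai).
# The AMI ID is a placeholder; M8g requires an arm64 AMI.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # Asia Pacific (Mumbai)

response = ec2.run_instances(
    InstanceType="m8g.large",
    ImageId="ami-0123456789abcdef0",  # placeholder: any arm64 AMI in the Region
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```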
AWS Blogs
AWS Japan Blog (Japanese)
- Hannover Messe 2025 AWS Booth Report
- Cost optimization for building AI models with Amazon EC2 and SageMaker AI
- EBA FinOps Party case study: through short-term "quick win" optimizations, Sky Co., Ltd. expects workshop-participating teams to cut costs by 20% per year
AWS News Blog
- AWS announces Pixtral Large 25.02 model in Amazon Bedrock serverless
- Introducing Amazon Nova Sonic: Human-like voice conversations for generative AI applications
- Amazon Bedrock Guardrails enhances generative AI application safety with new capabilities
AWS Machine Learning Blog
- How iFood built a platform to run hundreds of machine learning models with Amazon SageMaker Inference
- Build an enterprise synthetic data strategy using Amazon Bedrock
Open Source Project
Amplify UI
- @aws-amplify/ui-vue@4.3.1
- @aws-amplify/ui-react-storage@3.10.0
- @aws-amplify/ui-react-notifications@2.2.7
- @aws-amplify/ui-react-native@2.5.1
- @aws-amplify/ui-react-liveness@3.3.7
- @aws-amplify/ui-react-geo@2.2.7
- @aws-amplify/ui-react-core-notifications@2.2.7
- @aws-amplify/ui-react-core@3.4.1
- @aws-amplify/ui-react-ai@1.4.0
- @aws-amplify/ui-react@6.11.0