5/1/2025, 12:00:00 AM ~ 5/2/2025, 12:00:00 AM (UTC)
Recent Announcements
AI search flow builder is now available on the Amazon OpenSearch Service
An AI search flow builder is now available on Amazon OpenSearch Service when you launch OpenSearch 2.19+ domains. This new feature lets you design and run AI search flows through a low-code experience, without custom middleware. It provides a low-code designer accessible through the AI search flows section of OpenSearch Dashboards.

Using the builder, you can create sophisticated search flows enhanced by AWS and third-party AI services, and run them with a single query. Previously, you had to develop custom code to integrate AI into advanced workflows and manage them on custom middleware. Now, you can use the low-code designer and automation APIs to run and customize search flows on OpenSearch for use cases such as retrieval-augmented generation (RAG), dynamic query rewriting, reranking, semantic and multi-modal encoding, and more. The AI search flow builder is available in all AWS Regions where Amazon OpenSearch Service supports OpenSearch 2.19+. You can learn more from the documentation and tutorials, including how to integrate AI models from Amazon Bedrock, Amazon SageMaker, and other AWS and third-party AI services.
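For illustration only, here is a minimal sketch of the kind of artifact the flow builder produces behind the scenes: an OpenSearch search pipeline with a RAG response processor that you can then invoke with a single query. The domain endpoint, credentials, index name, and model ID are placeholders, and the exact pipeline the designer generates may differ.

```python
# Hypothetical sketch: create a RAG search pipeline and query it in one call.
# Endpoint, auth, index, and model identifiers below are placeholders.
import requests

DOMAIN = "https://my-domain.us-east-1.es.amazonaws.com"  # assumed domain endpoint
AUTH = ("user", "password")                              # use SigV4/IAM auth in practice

pipeline = {
    "response_processors": [
        {
            "retrieval_augmented_generation": {
                "model_id": "<bedrock-backed-model-id>",  # e.g. a connector to Amazon Bedrock
                "context_field_list": ["text"],
                "system_prompt": "You are a helpful assistant.",
            }
        }
    ]
}
requests.put(f"{DOMAIN}/_search/pipeline/rag-pipeline", json=pipeline, auth=AUTH)

query = {
    "query": {"match": {"text": "How do I rotate credentials?"}},
    "ext": {
        "generative_qa_parameters": {
            "llm_question": "How do I rotate credentials?",
        }
    },
}
print(requests.get(f"{DOMAIN}/my-index/_search?search_pipeline=rag-pipeline",
                   json=query, auth=AUTH).json())
```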
Amazon Connect now publishes post-contact completion events to Contact Event Stream
Amazon Connect now publishes a new contact completion event in the Contact Event Stream (via Amazon EventBridge), delivering real-time insight into when a contact has fully concluded, including the completion of any after-contact work (ACW). This new event gives contact centers full-lifecycle visibility into customer and agent interactions, enabling smarter, faster downstream actions. For example, if you want to automatically create a follow-up ticket once an agent finishes their wrap-up work, this event gives you a precise, real-time signal to trigger that workflow, keeping your systems in sync and your customer service responsive.

To learn more, refer to Contact events in the Amazon Connect Administrator Guide. This new event is available in all AWS Regions where Amazon Connect is available. Amazon Connect contact events do not incur additional Amazon Connect charges, but you may incur charges for Amazon EventBridge usage; see Amazon EventBridge Pricing for more information. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.
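As a hedged sketch of how you might consume the new event, the boto3 snippet below creates an EventBridge rule that routes Connect contact events to a ticketing Lambda function. The `eventType` value "COMPLETED" and the Lambda ARN are assumptions; check the contact events schema in the documentation for the exact name of the new post-ACW completion event.

```python
# Hedged sketch (boto3): trigger a follow-up workflow when a contact fully completes.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="connect-contact-completed",
    EventPattern=json.dumps({
        "source": ["aws.connect"],
        "detail-type": ["Amazon Connect Contact Event"],
        "detail": {"eventType": ["COMPLETED"]},  # assumed value for the new event
    }),
)
events.put_targets(
    Rule="connect-contact-completed",
    Targets=[{
        "Id": "create-ticket",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:create-followup-ticket",
    }],
)
```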
Amazon Connect adds enhanced contact information to DescribeContact API
Amazon Connect now provides richer contact information through the DescribeContact API, enabling your contact center to take smarter, faster action during and after customer interactions. This update surfaces key insights such as disconnect reasons, recording status, after-contact work time, and custom contact attributes, all in a single API response, helping to reduce complexity and improve performance. For example, a customer chat session might disconnect due to a network issue on the agent’s end. With the new DisconnectReason field in the DescribeContact API, you can now programmatically detect this and re-queue the chat for follow-up, ensuring the customer gets help without having to restart the conversation.

To learn more, refer to our public documentation. This new feature is available in all AWS Regions where Amazon Connect is available. To learn more about Amazon Connect, the AWS contact center as a service solution in the cloud, please visit the Amazon Connect website.
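For illustration, a hedged boto3 sketch of the re-queue scenario described above; the exact field names and values returned in the DescribeContact response (such as the `DisconnectReason` value shown) should be confirmed against the API reference.

```python
# Hedged sketch (boto3): inspect the enriched DescribeContact response and flag
# chats that dropped for network reasons so they can be re-queued.
import boto3

connect = boto3.client("connect")
contact = connect.describe_contact(
    InstanceId="<instance-id>",
    ContactId="<contact-id>",
)["Contact"]

if contact.get("DisconnectReason") == "NETWORK_ISSUE":  # assumed value
    # Hypothetical follow-up step: start a new chat or task so the customer
    # does not have to restart the conversation.
    print("Re-queue contact", contact["Id"])
```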
Amazon VPC IPAM now allows cost distribution to AWS Organizations member accounts
Today, AWS announced the ability for Amazon VPC IP Address Manager (IPAM) to distribute IPAM costs to AWS Organizations member accounts. This allows you to easily allocate costs to your internal teams for their IPAM usage.

VPC IPAM makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. When you enable IPAM for your AWS Organization, IPAM aggregates organization-wide IP address usage and charges the AWS account in which IPAM is created. With this launch, you can allocate the charges directly to AWS Organizations member accounts for their individual usage. For example, you may have IPAM enabled in a central AWS account that runs multiple networking services and want to allocate the IPAM charges across your internal teams, which you can now do easily using this feature. This feature is available in all AWS Regions where Amazon VPC IPAM is supported, including the AWS China Regions and the AWS GovCloud (US) Regions. To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.
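The sketch below illustrates how this might look with boto3, assuming a `MeteredAccount` setting on the IPAM that switches billing from the IPAM owner to the resource-owning member accounts; the parameter and value names are assumptions to verify against the current EC2 API reference.

```python
# Hedged sketch (boto3): bill IPAM usage to the member accounts that own the resources.
import boto3

ec2 = boto3.client("ec2")
ec2.modify_ipam(
    IpamId="ipam-0123456789abcdef0",  # placeholder IPAM ID
    MeteredAccount="resource-owner",  # assumed: 'ipam-owner' (default) | 'resource-owner'
)
```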
Amazon RDS for MySQL now supports new minor versions 8.0.42 and 8.4.5
Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor versions 8.0.42 and 8.4.5, the latest minor versions released by the MySQL community. We recommend upgrading to the newer minor versions to fix known security vulnerabilities in prior versions of MySQL and to benefit from bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.42 and 8.4.5 in the Amazon RDS user guide.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green Deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.
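As a minimal boto3 sketch, the call below moves an instance to 8.0.42 during its next maintenance window and opts it into automatic minor version upgrades; the instance identifier is a placeholder.

```python
# Minimal sketch (boto3): upgrade to the new minor and enable automatic minor upgrades.
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",  # placeholder
    EngineVersion="8.0.42",
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,  # apply during the scheduled maintenance window
)
```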
Amazon CloudWatch launches tiered pricing and additional destinations for AWS Lambda logs
Today, Amazon CloudWatch launched volume tiered pricing for AWS Lambda logs and support for additional delivery destinations. The new tiered pricing is effective immediately on Lambda function logs and requires no code or configuration changes. For example, in US East (N. Virginia), pricing for Lambda logs delivered to CloudWatch Logs starts at $0.50 per GB and tiers down to $0.05 per GB.

Additionally, CloudWatch now supports Amazon S3 and Amazon Data Firehose as Lambda log delivery destinations. These new destinations provide additional flexibility in Lambda log management and are also available at volume tiered pricing; in US East (N. Virginia), pricing starts at $0.25 per GB and tiers down to $0.05 per GB. CloudWatch Logs volume tiered pricing is available in all AWS Regions where CloudWatch Logs and Lambda are available. To learn more about these launches, visit the documentation, the launch blog post, and the CloudWatch Logs pricing page.
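A hedged boto3 sketch of delivering a Lambda function's logs to Amazon S3 with the CloudWatch Logs vended-delivery APIs is shown below; the `logType` string, ARNs, and names are assumptions or placeholders, so check the launch blog for the exact values.

```python
# Hedged sketch (boto3): route a Lambda function's logs to an S3 bucket.
import boto3

logs = boto3.client("logs")

logs.put_delivery_source(
    name="my-fn-logs",
    resourceArn="arn:aws:lambda:us-east-1:111122223333:function:my-fn",  # placeholder
    logType="APPLICATION_LOGS",  # assumed log type for Lambda function logs
)
dest = logs.put_delivery_destination(
    name="my-fn-logs-to-s3",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:s3:::my-log-bucket"  # placeholder bucket
    },
)
logs.create_delivery(
    deliverySourceName="my-fn-logs",
    deliveryDestinationArn=dest["deliveryDestination"]["arn"],
)
```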
AWS Launch Wizard automates multi-node SAP NetWeaver deployment on SAP ASE Database
AWS Launch Wizard now supports deployment of multi-node SAP NetWeaver applications on the SAP ASE database, allowing you to deploy multiple application servers at deployment time. With this deployment pattern, you deploy the SAP application and ASE database components on different EC2 instances to meet your application performance requirements. This launch expands on the existing Launch Wizard capability to automate deployment of SAP systems on the SAP ASE database in a single-node pattern.

AWS Launch Wizard offers a guided way of sizing, configuring, deploying, and scaling AWS resources for third-party applications, such as Microsoft SQL Server and SAP systems, without the need to manually identify and provision individual AWS resources. This launch brings parity between the supported deployment patterns of the SAP NetWeaver on HANA database stack and the SAP NetWeaver on ASE database stack. To learn more about AWS Launch Wizard, visit the Launch Wizard page.
AWS HealthImaging now supports DICOM video data and JPEG 2000 transcoding
AWS HealthImaging announces two enhancements that make it easier to manage diverse medical imaging data in the cloud.

First, HealthImaging now supports video data encoded per the DICOM standard. With this launch, video data can be stored in a HealthImaging data store alongside still image data. The service supports the DICOM video formats MPEG2, MPEG-4 AVC/H.264, and HEVC/H.265, corresponding to DICOM transfer syntax UIDs 1.2.840.10008.1.2.4.100 through 1.2.840.10008.1.2.4.108. This data can be retrieved as DICOM instances (.dcm files) and directly as video objects. For more information, see the documentation.

Second, HealthImaging has added support for retrieving lossless images in the JPEG 2000 lossless format (transfer syntax UID 1.2.840.10008.1.2.4.90). The service supports retrieving both DICOM instances (.dcm files) and image frames in the JPEG 2000 lossless format. HealthImaging’s transcoding to JPEG 2000 makes it easier to interoperate with external applications that consume data in this widely adopted format.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership. To learn more, see the AWS HealthImaging Developer Guide. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
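For context, a minimal boto3 sketch of retrieving an image frame from a data store follows; the IDs are placeholders, and the request option that selects the JPEG 2000 lossless transfer syntax (1.2.840.10008.1.2.4.90) for the returned frame is not shown here, so consult the API reference for it.

```python
# Minimal sketch (boto3): download one image frame from a HealthImaging data store.
import boto3

ahi = boto3.client("medical-imaging")
resp = ahi.get_image_frame(
    datastoreId="<datastore-id>",
    imageSetId="<image-set-id>",
    imageFrameInformation={"imageFrameId": "<image-frame-id>"},
)
with open("frame.bin", "wb") as f:
    f.write(resp["imageFrameBlob"].read())
```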
Amazon Aurora now supports PostgreSQL major version 17
Amazon Aurora now supports PostgreSQL major version 17 (17.4). This release contains product improvements and bug fixes from the PostgreSQL community along with Aurora-specific feature improvements such as enhanced memory management, faster storage metadata initialization during failovers, and optimized write-heavy workloads on new Graviton4 high-end instances. This release also includes new features for Babelfish, Aurora-specific security fixes, and updates to key extensions including pgvector 0.8.0 and PostGIS 3.5.1. Please refer to the PostgreSQL community announcement and Amazon Aurora PostgreSQL updates for more details about the release.

To use the new version, create a new Aurora PostgreSQL-compatible database with just a few clicks in the Amazon RDS Management Console. Please review the Aurora documentation to learn more about upgrading, and refer to the Aurora version policy to help decide how often to upgrade and how to plan your upgrade process. PostgreSQL 17.4 is available in all commercial AWS Regions and AWS GovCloud (US) Regions. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
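If you prefer the API to the console, a minimal boto3 sketch for creating a 17.4 cluster and its writer instance follows; identifiers and the instance class are placeholders.

```python
# Minimal sketch (boto3): create an Aurora PostgreSQL 17.4 cluster plus a writer.
import boto3

rds = boto3.client("rds")
rds.create_db_cluster(
    DBClusterIdentifier="apg17-cluster",
    Engine="aurora-postgresql",
    EngineVersion="17.4",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # let RDS store the password in Secrets Manager
)
rds.create_db_instance(
    DBInstanceIdentifier="apg17-writer",
    DBClusterIdentifier="apg17-cluster",
    DBInstanceClass="db.r7g.large",  # placeholder instance class
    Engine="aurora-postgresql",
)
```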
Amazon Connect Contact Lens launches new real-time adherence dashboard
Amazon Connect Contact Lens now includes a pre-configured agent adherence widget that supports filtering and sorting on agent adherence metrics, making day-to-day adherence management more efficient for supervisors. With this launch, supervisors can apply filters on adherence status, duration, and percentage; sort by duration or percentage; and apply conditional formatting within the agent adherence widget on the queue and agent performance dashboard. For example, a supervisor can highlight agents who have been falling behind schedule for more than 5 minutes, quickly identify breaches, and notify the agents accordingly. This widget simplifies adherence monitoring, improves supervisor productivity, and enables faster responses to adherence issues.

This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about this feature, see the Queue and agent performance dashboard in Amazon Connect.
AWS B2B Data Interchange now supports IPv6 on B2B Data Interchange Service APIs
AWS B2B Data Interchange now offers customers the option to use Internet Protocol version 6 (IPv6) when accessing the AWS B2B Data Interchange Service APIs.

More and more customers are adopting IPv6 to mitigate IPv4 address exhaustion in their private networks or to satisfy government mandates such as the US Office of Management and Budget (OMB) M-21-07 memorandum. With this launch, customers can standardize their applications and workflows for managing their AWS B2B Data Interchange resources on the new version of the Internet Protocol by using the new dual-stack AWS B2B Data Interchange Service endpoints. IPv6 support for the AWS B2B Data Interchange Service APIs is available in all commercial AWS Regions where AWS B2B Data Interchange is available. To learn more, visit the AWS B2B Data Interchange user guide.
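A minimal sketch of opting a boto3 client into the dual-stack endpoint is shown below; it assumes your SDK version resolves a dual-stack AWS B2B Data Interchange endpoint for the chosen Region.

```python
# Minimal sketch (boto3): call the B2B Data Interchange API over IPv6 (dual-stack).
import boto3
from botocore.config import Config

b2bi = boto3.client(
    "b2bi",
    region_name="us-east-1",                     # any supported commercial Region
    config=Config(use_dualstack_endpoint=True),  # resolve the dual-stack endpoint
)
print(b2bi.list_profiles())
```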
Amazon Bedrock Model Distillation is now generally available
Amazon Bedrock Model Distillation is now generally available. Model distillation is the process of transferring knowledge from a more capable model (teacher) to a less capable one (student), with the goal of making the faster, more cost-efficient student model as performant as the teacher for a specific use case. With general availability, we now add support for the following new models: Amazon Nova Premier (teacher) and Nova Pro (student), Claude 3.5 Sonnet v2 (teacher), and Llama 3.3 70B (teacher) and Llama 3.2 1B/3B (student). Amazon Bedrock Model Distillation now enables smaller models to accurately predict function calling for Agents use cases while helping to deliver substantially faster response times and lower operational costs. Distilled models in Amazon Bedrock are up to 500% faster and 75% less expensive than the original models, with less than 2% accuracy loss for use cases like RAG. In addition to RAG use cases, Model Distillation also adds support for data augmentation for Agents use cases, for function calling prediction.

Amazon Bedrock Model Distillation offers a single workflow that automates the process of generating teacher responses, adds data synthesis to improve those responses, and then trains the student model. It may apply different data synthesis methods best suited to your use case to create a distilled model that approximately matches the advanced model for that use case. Learn more in our documentation, website, and blog.
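As a hedged sketch of the workflow, the boto3 call below starts a distillation job with Nova Premier as the teacher and Nova Pro as the student; the model identifiers, role, S3 paths, and the nested `customizationConfig` shape are assumptions to verify against the Amazon Bedrock API reference.

```python
# Hedged sketch (boto3): kick off a model distillation job in Amazon Bedrock.
import boto3

bedrock = boto3.client("bedrock")
bedrock.create_model_customization_job(
    jobName="distill-agent-function-calling",
    customModelName="nova-pro-distilled",
    roleArn="arn:aws:iam::111122223333:role/BedrockDistillationRole",   # placeholder
    baseModelIdentifier="amazon.nova-pro-v1:0",                          # student (assumed ID)
    customizationType="DISTILLATION",
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "amazon.nova-premier-v1:0",    # teacher (assumed ID)
                "maxResponseLengthForInference": 1000,
            }
        }
    },
    trainingDataConfig={"s3Uri": "s3://my-bucket/prompts/train.jsonl"},  # placeholder
    outputDataConfig={"s3Uri": "s3://my-bucket/distillation-output/"},   # placeholder
)
```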
Amazon Q Developer in chat applications now supports AWS Systems Manager node access approvals
Amazon Q Developer in chat applications now supports AWS Systems Manager just-in-time node access approvals from Microsoft Teams and Slack. AWS customers can now monitor node access requests and approvals from chat channels to enhance their security posture and meet compliance requirements.

Just-in-time node access provides customers with policy-based, time-bound access to nodes and helps them operate with a zero-standing-privileges model. This launch provides a seamless integration for managing just-in-time access request approvals in chat applications. When configuring just-in-time approval policies, customers can designate Amazon SNS topics associated with Amazon Q Developer in chat applications configurations for managing node access approval requests. As operators make new node access requests, approvers are notified about the requests in the chat channels and can approve or reject access requests directly from the chat channel. Systems Manager node access approval management in chat applications is available at no additional cost in AWS Regions where Amazon Q Developer and Systems Manager just-in-time node access are offered. Visit the user guide and Systems Manager pricing to get started.
AWS Announces Managed Support for Energy Data Insights
Today, AWS announced managed support for Energy Data Insights (EDI) on AWS, delivered through AWS Managed Services (AMS), which enables energy customers to easily deploy, manage, and operate their subsurface data management platform on AWS in compliance with the OSDU® standard. Now, you can automatically deploy EDI on AWS, accelerate your data ingestion from weeks to hours, and intelligently process and organize your subsurface data with minimal manual effort. AWS extends your team with operational capabilities, allowing you to focus on innovation and accelerating time to value with your subsurface data.

With AWS-provided managed support, EDI on AWS removes the undifferentiated heavy lifting and the complexity of deploying, operating, and maintaining an OSDU Data Platform on AWS, optimizing your EDI operations and security while ensuring round-the-clock availability and protection of the service. AWS handles critical operations on your behalf, such as incident management and backup and restore, significantly improving the resilience of your OSDU Data Platform on AWS. You also receive timely support for application upgrades and patches, allowing you to stay current with the latest features and improvements.
EDI on AWS is available with pay-as-you-go pricing in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Ireland), Europe (Paris), and South America (São Paulo).
To learn more about EDI, visit the Product Detail page.
Amazon Route 53 Resolver DNS Firewall is now available in additional regions
Starting today, you can use Amazon Route 53 Resolver DNS Firewall and DNS Firewall Advanced in the Asia Pacific (Thailand) and Mexico (Central) Regions to govern and filter outbound DNS traffic for your Amazon Virtual Private Cloud (VPC).

Route 53 Resolver DNS Firewall is a managed service that enables you to block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. In addition, Route 53 Resolver DNS Firewall Advanced is a capability of DNS Firewall that allows you to detect and block DNS traffic associated with Domain Generation Algorithm (DGA) and DNS tunneling threats. DNS Firewall can be enabled only for Route 53 Resolver, the recursive DNS server that is available by default in all Amazon Virtual Private Clouds (VPCs). The Route 53 Resolver responds to DNS queries from AWS resources within a VPC for public DNS records, VPC-specific domain names, and Route 53 private hosted zones. See here for the list of AWS Regions where Route 53 Resolver DNS Firewall is available. Visit our product page and documentation to learn more about Amazon Route 53 Resolver DNS Firewall and its pricing.
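As a minimal boto3 sketch, the snippet below builds a small DNS Firewall configuration, blocking one domain for a VPC in the Asia Pacific (Thailand) Region; the VPC ID, names, and domain are placeholders.

```python
# Minimal sketch (boto3): block a domain for a VPC with Route 53 Resolver DNS Firewall.
import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="ap-southeast-7")  # Asia Pacific (Thailand)

domain_list = r53r.create_firewall_domain_list(
    CreatorRequestId=str(uuid.uuid4()), Name="blocked-domains"
)["FirewallDomainList"]
r53r.update_firewall_domains(
    FirewallDomainListId=domain_list["Id"],
    Operation="ADD",
    Domains=["malicious.example.com."],  # placeholder domain
)

rule_group = r53r.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()), Name="dns-firewall-rules"
)["FirewallRuleGroup"]
r53r.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    FirewallDomainListId=domain_list["Id"],
    Priority=100,
    Action="BLOCK",
    BlockResponse="NXDOMAIN",
    Name="block-low-reputation",
)
r53r.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["Id"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    Priority=101,
    Name="vpc-association",
)
```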
Amazon Neptune Database now supports Graviton3 R7g and Graviton4 R8g instances
Amazon Neptune Database now supports Graviton3-based R7g and Graviton4-based R8g database instances for Amazon Neptune engine versions 1.4.5 and above, priced 16% lower than comparable R6g instances.

Graviton3-based R7g instances are the first AWS database instances to feature the latest DDR5 memory, which provides 50% more memory bandwidth than DDR4, enabling high-speed access to data in memory. R7g database instances offer up to 30 Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Graviton4-based R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU along with the latest DDR5 memory. R7g instances for Neptune are now available in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (London), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Malaysia), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Spain), and South America (São Paulo). R8g instances for Neptune are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Stockholm), and Europe (Spain). You can launch R7g and R8g instances for Neptune using the AWS Management Console or the AWS CLI. Upgrading a Neptune cluster to R7g or R8g instances requires a simple instance type modification for Neptune engine versions 1.4.5 or higher. For more information on pricing and regional availability, refer to the Amazon Neptune pricing page.
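Since the upgrade is just an instance class change, a minimal boto3 sketch looks like this; the identifier and size are placeholders.

```python
# Minimal sketch (boto3): move an existing Neptune instance (engine 1.4.5+) to Graviton4 R8g.
import boto3

neptune = boto3.client("neptune")
neptune.modify_db_instance(
    DBInstanceIdentifier="my-neptune-instance",  # placeholder
    DBInstanceClass="db.r8g.4xlarge",
    ApplyImmediately=True,  # or False to wait for the maintenance window
)
```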
Amazon Connect adds five new metrics and dashboard drill downs for outbound campaigns
Amazon Connect outbound campaigns now offers reporting on recipients and campaign executions, along with additional metrics for tracking progress and troubleshooting issues. These capabilities are available in the Contact Lens dashboards and allow you to easily monitor campaign engagement by tracking total outreach against the total number of recipients targeted. You can drill down into your campaign and examine performance data for each campaign execution; for example, if you run a campaign every week for a month, you can drill down to view campaign performance for each week. You can also identify and resolve delivery issues for each campaign; for example, out of 20 delivery issues, you can now see that 12 were due to ineligible time zones and 8 hit communication limit thresholds. The real-time campaigns dashboard shows the journey of your campaign, from how many recipients you targeted to how many you reached. All new metrics are also available through the GetMetricDataV2 API and the zero-ETL data lake for custom reporting or integrations with other data sources.

These enhanced outbound campaign analytics are available in all AWS Regions where Amazon Connect outbound campaigns is available. For more information about outbound campaign analytics, consult the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect outbound campaigns, please visit the outbound campaigns webpage.
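For custom reporting, a hedged boto3 sketch of pulling campaign metrics through GetMetricDataV2 follows; the metric name, filter key, and grouping shown are assumptions or placeholders, and the exact names of the five new metrics are listed in the Amazon Connect API Reference.

```python
# Hedged sketch (boto3): query outbound campaign metrics for the last 7 days.
import boto3
from datetime import datetime, timedelta, timezone

connect = boto3.client("connect")
end = datetime.now(timezone.utc)
resp = connect.get_metric_data_v2(
    ResourceArn="arn:aws:connect:us-east-1:111122223333:instance/<instance-id>",
    StartTime=end - timedelta(days=7),
    EndTime=end,
    Filters=[{"FilterKey": "CAMPAIGN", "FilterValues": ["<campaign-id>"]}],  # assumed key
    Groupings=["CAMPAIGN"],                                                  # assumed grouping
    Metrics=[{"Name": "CAMPAIGN_CONTACTS_ABANDONED_AFTER_X"}],               # assumed metric name
)
print(resp["MetricResults"])
```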
Amazon Q Developer announces a new agentic coding experience in the IDE
Today, Amazon Q Developer announces a new agentic coding experience within the IDE that transforms how you build software. The new experience redefines how you write, modify, and maintain code by leveraging natural language understanding to seamlessly execute complex workflows.

The new coding experience provides intelligent task execution, enabling Q Developer to perform actions beyond code suggestions, such as modifying files, generating code diffs, and running commands based on your natural language instructions. Additionally, it offers transparent reasoning, allowing you to follow Q Developer’s thought process as it interprets your requirements and makes code changes. It also supports multi-turn conversations, allowing you to have dynamic, back-and-forth exchanges that maintain context across your entire codebase and development session. Finally, it provides granular control, giving you the choice between automated code modifications or step-by-step review and confirmation. This experience is already available in the Amazon Q Developer CLI and is powered by the latest Claude 3.7 Sonnet model. The agentic coding experience supports multiple spoken languages and is available within the Visual Studio Code IDE extension, with support for JetBrains and Eclipse coming soon. Learn more.
AWS Blogs
AWS Japan Blog (Japanese)
AWS Big Data Blog
AWS Compute Blog
AWS DevOps & Developer Productivity Blog
AWS for Industries
- Networks for AI and AI for Networks: AWS and Orange’s Journey
- Modernizing trading workloads with next-generation AWS Outposts racks
- Aligning Amazon Bedrock with NAIC AI Principles and Model Bulletin
- Modernize 5G networks with second-generation AWS Outposts racks
AWS Machine Learning Blog
- Best practices for Meta Llama 3.2 multimodal fine-tuning on Amazon Bedrock
- Extend large language models powered by Amazon SageMaker AI using Model Context Protocol
- Automate document translation and standardization with Amazon Bedrock and Amazon Translate
- Autonomous mortgage processing using Amazon Bedrock Data Automation and Amazon Bedrock Agents
- Amazon Bedrock Model Distillation: Boost function calling accuracy while reducing cost and latency