5/4/2026, 12:00:00 AM ~ 5/5/2026, 12:00:00 AM (UTC)
Recent Announcements
AWS Entity Resolution launches support for incremental Machine Learning based matching workflows
AWS Entity Resolution now supports Machine Learning (ML) based incremental matching workflows in general availability, transforming how enterprises process entity resolution at scale. Previously, adding even a single new record required customers to reprocess their entire dataset, a process that could take up to 2 days and cost thousands of dollars. This bottleneck forced many businesses to seek costly workarounds or alternative solutions.

With this enhancement, AWS Entity Resolution processes only the records added since the last workflow run. The result is a dramatic efficiency gain: 1M incremental records can be processed in under 1 hour, a 95% reduction in processing time compared with full reprocessing, while infrastructure costs also drop significantly. The feature supports up to 50M incremental records per run over datasets containing up to 1 billion historical base records, making AWS Entity Resolution viable for continuous, large-scale enterprise workloads that were previously uneconomical.
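Conceptually, an incremental run compares only new records against the already-resolved base instead of re-matching everything. A minimal sketch of that idea, with illustrative function names and a simplistic match key (this is not the AWS Entity Resolution API or its ML matching logic):

```python
# Conceptual sketch of incremental matching: only records added since the
# last run are compared against the resolved base, instead of reprocessing
# everything. Names and the match rule are illustrative, not the service API.

def normalize(record):
    """Build a simple match key from name + email (illustrative rule only)."""
    return (record["name"].strip().lower(), record["email"].strip().lower())

def full_run(records):
    """Baseline full run: resolve every record into an entity index."""
    index = {}
    for rec in records:
        index.setdefault(normalize(rec), []).append(rec["id"])
    return index

def incremental_run(index, new_records):
    """Incremental run: touch only the new records; the base index is reused."""
    for rec in new_records:
        index.setdefault(normalize(rec), []).append(rec["id"])
    return index

base = full_run([
    {"id": "r1", "name": "Ana Diaz", "email": "ana@example.com"},
    {"id": "r2", "name": "ana diaz", "email": "ANA@example.com"},
])
index = incremental_run(base, [
    {"id": "r3", "name": "Ana Diaz ", "email": "ana@example.com"},
])
print(index[("ana diaz", "ana@example.com")])  # ['r1', 'r2', 'r3']
```

The cost asymmetry in the announcement follows from the same shape: the incremental step's work scales with the new records, not with the billion-record base.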
You can start using incremental ML workflows in all AWS Regions where AWS Entity Resolution is available. For more information on starting an incremental ML workflow, see our user guide. For more information about AWS Entity Resolution, visit our product page.
Amazon Quick generates dashboards from natural language prompts
Amazon Quick now generates dashboards from natural language prompts with Generate Analysis. You describe the dashboard you want, select up to three datasets, and review an editable plan before generation. Amazon Quick then produces organized sheets with visuals selected for your data, filter controls for exploring different dimensions, and calculated fields such as year-over-year growth and month-over-month comparisons. Generate Analysis reduces dashboard creation from hours of manual configuration to minutes.

With Generate Analysis, you can describe goals such as “create a sales performance dashboard with revenue trends, regional comparisons, and month-over-month growth” and receive a dashboard ready for refinement. The output works with existing publishing workflows, embedding, CI/CD pipelines, and point-and-click editing.
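The calculated fields mentioned above, such as month-over-month comparisons, compute the standard period-over-period growth formula: (current − prior) / prior. A minimal sketch of the arithmetic (function and variable names are illustrative, not Amazon Quick syntax):

```python
# Month-over-month growth, the standard formula behind period-over-period
# calculated fields: (current - prior) / prior for each consecutive pair.

def mom_growth(series):
    """Return month-over-month growth rates for a list of monthly values."""
    return [
        (cur - prev) / prev
        for prev, cur in zip(series, series[1:])
    ]

revenue = [100.0, 110.0, 99.0]
print([round(g, 2) for g in mom_growth(revenue)])  # [0.1, -0.1]
```

Year-over-year growth is the same formula with a 12-month offset between the compared periods.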
At launch, Generate Analysis is available to Enterprise subscription/Author Pro users. Authors also have promotional access to this capability through December 2026 as part of Amazon Quick Enterprise, provided their organization has not restricted access. Generate Analysis is now generally available in all AWS Regions where Amazon Quick is available.
To learn more, see Generating an analysis with natural language prompts in the Amazon Quick User Guide. To get started, open any dataset in Amazon Quick and choose Generate analysis.
Amazon Aurora DSQL now supports the JSON data type with compression
Amazon Aurora DSQL introduces support for the PostgreSQL JSON data type with optional compression. With JSON data type support, you can now use code and tools that depend on PostgreSQL’s JSON type with Aurora DSQL without modification, making it easier to store semi-structured data alongside relational data.

You can use the JSON data type when creating or modifying tables to store semi-structured data such as API payloads, configuration objects, or event logs. With PostgreSQL compression enabled by default, larger JSON payloads are stored more efficiently, helping reduce storage costs. For details on the supported data types, see the Aurora DSQL documentation. Get started with Aurora DSQL for free with the AWS Free Tier. For information about Regional availability, see the AWS Region table. To learn more about Aurora DSQL, visit the webpage.
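A sketch of what using the JSON type might look like: the DDL declares a `JSON` column alongside relational columns, and the application serializes a payload before binding it as a parameter. Table and column names here are illustrative, not from the announcement, and the commented `execute` call assumes a PostgreSQL driver such as psycopg:

```python
import json

# Illustrative DDL: a JSON column alongside relational columns. The table
# and column names are examples, not from the Aurora DSQL documentation.
DDL = """
CREATE TABLE api_events (
    event_id    BIGINT PRIMARY KEY,
    received_at TIMESTAMP,
    payload     JSON
);
"""

# Serialize a semi-structured API payload before binding it to the JSON column.
payload = {"endpoint": "/orders", "status": 200, "tags": ["retry", "batch"]}
serialized = json.dumps(payload)

# A PostgreSQL driver would bind `serialized` as the JSON parameter, e.g.:
# cur.execute("INSERT INTO api_events VALUES (%s, now(), %s)", (1, serialized))
print(json.loads(serialized)["status"])  # 200
```

Because the column is PostgreSQL's standard JSON type, existing drivers and ORMs that already handle JSON columns should work without modification, which is the point of the launch.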
Amazon Quick introduces Dataset Q&A for conversational analytics against enterprise data
Amazon Quick now supports Dataset Q&A, a conversational analytics capability that enables users to ask natural language questions directly against their enterprise data. Alongside Dashboard Q&A, Dataset Q&A provides a powerful new way to interact with data in Amazon Quick: anyone with dataset access can explore their data and get meaningful, actionable insights using natural language, while all governance rules, including Row Level and Column Level Security policies set by data owners, are respected.

Dataset Q&A is powered by Amazon Quick’s text-to-SQL agent, which interprets user questions, identifies the right data, and generates precise SQL, all in a single conversational step. The agent works across the data sources users bring into Amazon Quick, generating engine- and dialect-aware optimized SQL against SPICE or AWS data assets such as Amazon Redshift, Amazon Athena, Aurora PostgreSQL, and Apache Iceberg tables stored in Amazon S3 table buckets. Data owners can enrich their datasets with custom instructions, business definitions, and field descriptions directly in Amazon Quick or through simple file uploads. These curated semantics, together with dataset metadata, are ingested into a knowledge graph that captures the meaning of and relationships across data assets, enabling Quick’s orchestrator to identify the most relevant datasets and generate accurate SQL. The Dataset Q&A agent delivers accurate answers across a broad range of question types, from trend analysis and time-series comparisons to ranking, multi-condition analytical queries, and open-ended exploratory questions. Dataset Q&A also includes an Explain capability that lets users step through the reasoning behind each answer, inspect the underlying logic, and validate that the generated SQL correctly interprets their question before acting on the result.
Dataset Q&A is now generally available in all AWS Regions where Amazon Quick is available. To get started, see this blog post.
Amazon Quick now supports S3 tables bucket as a data source
Amazon Quick now supports Amazon S3 table buckets as a data source, enabling users to build dashboards, run conversational analytics, and explore Apache Iceberg tables stored in S3 table buckets. With no intermediate data warehouse or OLAP layer required, users can now work with their lakehouse data in Amazon Quick for both agentic AI and BI workloads through a simplified data architecture.

Paired with zero-ETL from sources like Salesforce, SAP, and Amazon Kinesis Data Firehose directly into S3 table buckets, users get near real-time insights with minimal pipeline dependencies. Getting started is straightforward: admins configure S3 table bucket permissions once, and authors can immediately create datasets and start building. S3 table bucket datasets are fully accessible through Amazon Quick’s Dataset Q&A: ask a natural language question and get answers grounded in your data lake as the source of truth.
Amazon S3 table buckets as a data source in Amazon Quick is now available in all AWS Regions where Amazon Quick is available. To get started, see this blog post.
Amazon EventBridge supports data plane logging to AWS CloudTrail
Today, Amazon EventBridge announces support for logging data plane APIs using AWS CloudTrail, giving customers greater visibility into event bus activity in their AWS account for security best practices and operational troubleshooting. Amazon EventBridge is a serverless event bus that enables customers to build event-driven applications at scale using events from AWS services, integrated SaaS applications, and custom sources.

CloudTrail captures API activity related to Amazon EventBridge as events, including calls from the Amazon EventBridge console and calls made programmatically using Amazon EventBridge APIs. Using the information that CloudTrail collects, you can identify a specific request to an Amazon EventBridge API, the IP address of the requester, the requester’s identity, and the date and time of the request. Logging EventBridge APIs using CloudTrail helps you enable operational and risk auditing, governance, and compliance of your AWS account. With the introduction of data plane logging support, the EventBridge PutEvents API is now logged to CloudTrail. To opt in to CloudTrail logging of this data plane API, configure logging on your event bus using the AWS CloudTrail console or the CloudTrail APIs.

Logging data plane EventBridge APIs using AWS CloudTrail is now available in all commercial AWS Regions, the AWS GovCloud (US) Regions, the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. To learn more about logging data plane APIs using AWS CloudTrail, see the AWS documentation. For more information about CloudTrail, see the AWS CloudTrail User Guide.
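CloudTrail delivers each captured call as a JSON record, so auditing PutEvents activity amounts to filtering records and reading the fields the paragraph above lists. A minimal sketch; the sample record values are made up, though `eventSource`, `eventName`, `eventTime`, `sourceIPAddress`, and `userIdentity` are standard fields in the CloudTrail record format:

```python
import json

# Illustrative CloudTrail record for a PutEvents data-plane call. Field
# names follow the standard CloudTrail record format; values are invented.
record_json = """
{
  "eventSource": "events.amazonaws.com",
  "eventName": "PutEvents",
  "eventTime": "2026-05-04T12:00:00Z",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "app-producer"}
}
"""

record = json.loads(record_json)

# Keep only PutEvents calls and extract who made them, from where, and when.
if record["eventName"] == "PutEvents":
    print(record["userIdentity"]["userName"],
          record["sourceIPAddress"],
          record["eventTime"])
```

In practice the same filter would run over records delivered to your CloudTrail S3 bucket or queried through CloudTrail Lake.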
Amazon Quick upgrades the extension for Microsoft Outlook (Preview)
Today, AWS announces the preview of the Amazon Quick extension for Microsoft Outlook, which brings generative AI-powered productivity directly into your email and calendar workflows. With the extension, you can use natural language to summarize unread messages, organize your inbox, schedule meetings, and draft in-line responses, all without leaving Outlook.

The Quick extension for Outlook helps you focus on what matters most by prioritizing emails, searching for specific discussions, and organizing messages into folders or flagging them for follow-up. Using conversational instructions, you can find optimal meeting times with coworkers and schedule meetings. For email threads, you can generate summaries, extract action items, and draft contextual replies that pull in relevant information from your Amazon Quick spaces and knowledge bases. You can also trigger actions in external applications using your configured integrations directly from Outlook.
The Amazon Quick extension for Microsoft Outlook is available in preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London).
To get started with Amazon Quick, visit the Quick website, and sign up for an account in minutes. Read the documentation to learn more, and install the Quick extension for Outlook from the Quick download page.
Amazon SageMaker AI launches AI agent experience for model customization
Amazon SageMaker AI now features an agentic experience that transforms model customization from a months-long process into a workflow completed in days or hours. Customers building an AI solution need to frame their use case goals and success criteria, prepare data, choose the right models, and configure, run, and analyze multiple experiments with various models and fine-tuning techniques. Once a model candidate that meets the success criteria is identified, they need to find the most cost-performant way to deploy it. Throughout this workflow, customers must also manage the undifferentiated heavy lifting of setting up the infrastructure to train and deploy the models. The new capability enables developers to use natural language interactions with coding agents to streamline the entire journey from use case definition to production deployment of a high-quality model.

The agentic experience, based on SageMaker AI model customization agent skills, delivers expertise on fine-tuning applied to a builder’s specific use case, transformation to the required data formats, comprehensive quality evaluation using LLM-as-a-judge metrics, and flexible deployment options to Amazon Bedrock or SageMaker AI endpoints. Customers can install these skills in any IDE of their choice, such as Visual Studio and Cursor, and work with multiple coding agents, including Kiro, Claude Code, and Copilot, to optimize popular model families like Amazon Nova, Llama, Qwen, and GPT-OSS. The experience generates reusable, editable code artifacts for transparency, reproducibility, and automation through integration into AIOps pipelines. Install SageMaker AI skills in your favorite IDE using the sagemaker-ai agent plugin. SageMaker AI model customization skills are also available and pre-installed in SageMaker Studio Notebooks, along with the Kiro coding agent.
Sign up for a Kiro subscription, open the chat window in Studio Notebooks, and start chatting with the agent to build the workflow. The experience supports advanced customization techniques, including supervised fine-tuning for instruction tuning, Direct Preference Optimization for adjusting tone and preference selections, and reinforcement learning for use cases with verifiable correctness. To learn more about model customization with the AI agent experience in Amazon SageMaker AI, visit the SageMaker model customization documentation.
AWS Payment Cryptography announces support for cross account key sharing
AWS Payment Cryptography now supports cross-account sharing of keys using resource-based policies (RBPs). With this new feature, customers can more easily manage cryptographic keys across multiple accounts, both internal and external to their company, providing more flexibility to manage keys at scale. With AWS Payment Cryptography, you can simplify cryptography operations in your cloud-hosted payment applications with a service that grows elastically with your business and has been assessed as compliant with PCI PIN Security and Point-to-Point Encryption (P2PE) requirements.

Many customers use multiple AWS accounts to separate different workloads, applications, or use cases for payment processing, following AWS PCI DSS guidance. While this pattern is also common with traditional infrastructure, it often leads to duplicated cryptographic material, making key lineage and access control harder to manage. With resource-based policies, customers can keep a single copy of key material and use concise, per-resource access control to enable cross-account access without relying on import/export flows.
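A resource-based policy that grants another account access follows the standard IAM policy document shape (Version, Statement, Principal, Action, Resource). A hedged sketch; the account ID is an example placeholder and the action name is illustrative, so check the Payment Cryptography user guide for the exact actions the service supports:

```python
import json

# Hedged sketch of a resource-based policy granting a second account use of
# a key. The account ID and action name are illustrative placeholders; the
# document shape (Version, Statement, Principal, ...) is standard IAM.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPartnerAccountKeyUse",
            "Effect": "Allow",
            # The external account being granted access (example account ID).
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            # Illustrative action; consult the user guide for real ones.
            "Action": ["payment-cryptography:GetPublicKeyCertificate"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because the policy is attached to the key itself, the sharing account keeps a single copy of the key material and the partner account never imports or exports it, which is the workflow the launch replaces.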
This feature is available across all AWS Regions where AWS Payment Cryptography is available. To learn more about this feature or to get started with the service, consult the AWS Payment Cryptography user guide.
Amazon RDS for SQL Server now supports M8i and R8i instances
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports M8i and R8i instances. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and R8i instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to equivalent 7th generation Intel-based instances.

To use the new M8i and R8i instances, you can modify your existing RDS database instance or create a new RDS database instance from the RDS Management Console, or using the AWS SDK or CLI. See Amazon RDS for SQL Server Pricing for up-to-date pricing and regional availability.
Amazon RDS for SQL Server supports read replica with additional storage volumes
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports read replicas for database instances with additional storage volumes. Additional storage volumes allow customers to scale database storage up to 256 TiB by adding up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume. With this launch, for database instances configured with additional storage volumes, customers can create same-region and cross-region read replica database instances.

When a read replica is created for a database instance with additional storage volumes, the replica preserves the storage layout of the source instance, including the configuration of any additional storage volumes. After the initial creation, you can independently manage additional storage volume configurations on the source and read replica instances. Read replicas with additional storage volumes are available in all AWS commercial Regions and the AWS GovCloud (US) Regions. Customers can start using this feature today through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more, see Working with read replicas for Amazon RDS for SQL Server and Working with storage in RDS for SQL Server in the Amazon RDS User Guide.
AWS Blogs
AWS News Blog
AWS Open Source Blog
AWS Database Blog
- How Amazon DocumentDB on AWS Graviton4 R8g instances delivers 63% better Sysbench benchmark results
- Connect to Amazon RDS for Db2 from your laptop
- Troubleshoot Amazon RDS for Oracle to Amazon Redshift DMS migrations with AWS DevOps Agent
AWS Developer Tools Blog
AWS HPC Blog
AWS for Industries
Artificial Intelligence
- Beyond BI: How the Dataset Q&A feature of Amazon Quick powers the next generation of data decisions
- Introducing the agent quality loop: AgentCore Optimization now in preview
- Agent-guided workflows to accelerate model customization in Amazon SageMaker AI
- Generate dashboards from natural language prompts in Amazon Quick
- From data lake to AI-ready analytics: Introducing new data source with S3 Tables in Amazon Quick
- Introducing Dataset Q&A: Expanding natural language querying for structured datasets in Amazon Quick
- Capacity-aware inference: Automatic instance fallback for SageMaker AI endpoints
Networking & Content Delivery
- Tag-based invalidation in Amazon CloudFront
- Manage caches with precision using Amazon CloudFront Invalidation by Cache Tag
- Selecting the Right AWS VPN Solution: A Decision Framework