12/3/2024, 12:00:00 AM ~ 12/4/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon Q Developer can now automate code reviews

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly.

Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repositories, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback on your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews in the Visual Studio Code and IntelliJ IDEA IDEs with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with code reviews, visit Amazon Q Developer or read the news blog.

Amazon Bedrock now supports multi-agent collaboration

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. Developers can create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations, working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.
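
As an illustration of how the pieces might fit together, here is a minimal sketch using boto3, assuming the preview's agentCollaboration setting and AssociateAgentCollaborator operation on the bedrock-agent API; the role, model, and alias ARNs are placeholders, and parameter names may differ from the final API:

    import boto3

    client = boto3.client("bedrock-agent", region_name="us-east-1")

    # Create a supervisor agent that routes work to collaborators.
    supervisor = client.create_agent(
        agentName="finance-supervisor",
        foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
        agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
        instruction="Coordinate data-collection and analysis agents to answer finance questions.",
        agentCollaboration="SUPERVISOR",  # assumed preview setting
    )

    # Attach an existing agent (via its alias ARN) as a collaborator.
    client.associate_agent_collaborator(
        agentId=supervisor["agent"]["agentId"],
        agentVersion="DRAFT",
        agentDescriptor={"aliasArn": "arn:aws:bedrock:us-east-1:123456789012:agent-alias/AGENT123/ALIAS123"},
        collaboratorName="data-collector",
        collaborationInstruction="Gather the raw financial data the supervisor requests.",
    )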

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.

Amazon Q Business introduces over 50 actions for popular business applications and platforms

Today, we are excited to announce that Amazon Q Business, including Amazon Q Apps, has expanded its capabilities with a ready-to-use library of over 50 actions, delivered through plugins, for popular business applications and platforms. This enhancement allows Amazon Q Business users to complete tasks in other applications without leaving the Amazon Q Business interface, improving the user experience and operational efficiency.

The new plugins cover a wide range of widely used business tools, including PagerDuty, Salesforce, Jira, Smartsheet, and ServiceNow. These integrations enable users to perform tasks such as creating and updating tickets, managing incidents, and accessing project information directly from within Amazon Q Business. With Amazon Q Apps, users can further automate their everyday tasks by leveraging the newly introduced actions directly within their purpose-built apps.

The new plugins are available in all AWS Regions where Amazon Q Business is available. To get started with the new plugins, customers can access them directly from their Amazon Q Business interface. To learn more about Amazon Q Business plugins and how they can enhance your organization’s productivity, visit the Amazon Q Business product page or explore the Amazon Q Business plugin documentation.
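
For teams that manage applications programmatically, attaching one of the built-in plugins could look roughly like this hedged boto3 sketch using the Amazon Q Business CreatePlugin API; the application ID, Jira URL, secret ARN, and role ARN are placeholders:

    import boto3

    qbusiness = boto3.client("qbusiness", region_name="us-east-1")

    # Register a Jira plugin against an existing Q Business application
    # so users can create and update issues from the Q Business interface.
    response = qbusiness.create_plugin(
        applicationId="a1b2c3d4-example-app-id",  # placeholder
        displayName="jira-actions",
        type="JIRA",
        serverUrl="https://your-domain.atlassian.net",
        authConfiguration={
            "oAuth2ClientCredentialConfiguration": {
                "secretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:jira-oauth",
                "roleArn": "arn:aws:iam::123456789012:role/QBusinessPluginRole",
            }
        },
    )
    print(response["pluginId"])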

Amazon Q Developer now provides transformation capabilities for .NET porting (Preview)

Today, AWS announces new generative AI-powered transformation capabilities of Amazon Q Developer in public preview to accelerate porting of .NET Framework applications to cross-platform .NET. Using these capabilities, you can modernize your Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs.

With this launch, Amazon Q Developer is now equipped with agentic capabilities for transformation that allow you to port hundreds of .NET Framework applications running on Windows to Linux-ready cross-platform .NET. Using Amazon Q Developer, you can delegate tedious manual porting tasks and free up your team’s time to focus on innovation. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives and connect it to your source code repositories. Amazon Q Developer then starts the transformation process with an assessment of your application code to identify .NET versions, supported project types, and their dependencies, and then ports the assessed application code along with its accompanying unit tests to cross-platform .NET. You and your team can collaboratively review, adjust, and approve the transformation process. Additionally, Amazon Q Developer provides a detailed work log as a documented trail of transformation decisions to support your organizational compliance objectives.

The transformation capabilities of Amazon Q Developer are available in public preview via a web experience and in your Visual Studio integrated development environment (IDE). To learn more, read the blogs on the web experience and the IDE experience, and visit the Amazon Q Developer transformation capabilities webpage and documentation.

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extraction and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication so you can focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.
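
If you script your infrastructure rather than use the console, the setup might look roughly like the following boto3 sketch against the AWS Glue CreateIntegration API that backs zero-ETL integrations; the parameter names, ARNs, and response fields here are assumptions, so treat this as illustrative only:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Create a zero-ETL integration from a DynamoDB table (source)
    # into a SageMaker Lakehouse catalog (target). ARNs are placeholders.
    integration = glue.create_integration(
        IntegrationName="orders-to-lakehouse",
        SourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        TargetArn="arn:aws:glue:us-east-1:123456789012:catalog/lakehouse-catalog",
    )
    print(integration.get("Status"))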

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions. 

To learn more, visit DynamoDB integrations and read the documentation.

Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to instantly discover and understand your S3 data, with automated, easily queried metadata that updates in near real time. This helps you curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like the size and source of an object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing AI-generated videos to be annotated with metadata that specifies their AI origin, creation timestamp, and the specific model used to generate them.

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and is coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.
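
Because the metadata lands in a read-only table, discovery can be a plain SQL query. The sketch below runs one through Athena with boto3; the catalog, database, table, and column names are illustrative placeholders rather than the documented schema:

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Find the most recently modified objects via the metadata table.
    query = """
    SELECT key, size, last_modified_date
    FROM "s3tablescatalog"."aws_s3_metadata"."my_bucket_metadata"
    ORDER BY last_modified_date DESC
    LIMIT 100
    """

    run = athena.start_query_execution(
        QueryString=query,
        WorkGroup="primary",
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
    )
    print(run["QueryExecutionId"])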

Amazon Bedrock Guardrails now supports Automated Reasoning checks (Preview)

With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning into its generative AI offerings. Automated Reasoning checks help detect hallucinations and provide verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools are not guessing or predicting accuracy; instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, consequently improving transparency. Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations, and because LLMs generate compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations, and an inability to explain why they occurred, slows generative AI adoption for use cases where accuracy is critical.

With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence.

Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit Amazon Bedrock Guardrails and read the News blog.

Amazon DynamoDB global tables previews multi-Region strong consistency

Starting today in preview, Amazon DynamoDB global tables now supports multi-Region strong consistency. DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database used by tens of thousands of customers. With this new capability, you can now build highly available multi-Region applications with a Recovery Point Objective (RPO) of zero, achieving the highest level of resilience.

Multi-Region strong consistency ensures your applications can always read the latest version of data from any Region in a global table, removing the undifferentiated heavy lifting of managing consistency across multiple Regions. It is useful for building global applications with strict consistency requirements, such as user profile management, inventory tracking, and financial transaction processing.
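
On the read path, this works through the existing consistent-read mechanism. A minimal boto3 sketch; the table, key, and Region are placeholders, and it assumes the global table is enrolled in the multi-Region strong consistency preview:

    import boto3

    # Read from the replica in US West (Oregon); with multi-Region strong
    # consistency enabled, a consistent read returns the latest write no
    # matter which Region accepted it.
    dynamodb = boto3.client("dynamodb", region_name="us-west-2")

    response = dynamodb.get_item(
        TableName="UserProfiles",  # placeholder table name
        Key={"UserId": {"S": "user-123"}},
        ConsistentRead=True,  # request a strongly consistent read
    )
    print(response.get("Item"))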

The preview of DynamoDB global tables with multi-Region strong consistency is available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). DynamoDB global tables with multi-Region strong consistency is billed according to existing global tables pricing. To learn more about global tables multi-Region strong consistency, see the preview documentation. For information about DynamoDB global tables, see the global tables information page and the developer guide.

AWS Glue Data Catalog now automates generating statistics for new tables

AWS Glue Data Catalog now automates generating statistics for new tables. These statistics are integrated with the cost-based optimizers (CBO) of Amazon Redshift and Amazon Athena, resulting in improved query performance and potential cost savings.

Table statistics are used by query engines such as Amazon Redshift and Amazon Athena to determine the most efficient way to execute a query. Previously, creating statistics for Apache Iceberg tables in the AWS Glue Data Catalog required you to continuously monitor and update configurations for your tables. Now, the AWS Glue Data Catalog lets you generate statistics automatically for new tables with a one-time catalog configuration. You can get started by selecting the default catalog in the Lake Formation console and enabling table statistics in the table optimization configuration tab. As new tables are created or existing tables are updated, statistics are generated using a sample of rows for all columns and are refreshed periodically. For Apache Iceberg tables, these statistics include the number of distinct values (NDVs). For other file formats like Parquet, additional statistics are collected, such as the number of nulls, maximum and minimum values, and average length. Amazon Redshift and Amazon Athena use the updated statistics to optimize queries, applying optimizations such as optimal join ordering or cost-based aggregation pushdown. The Glue Data Catalog console gives you visibility into the updated statistics and statistics generation runs.
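
Once automation is enabled, you can verify what was collected with the long-standing column-statistics API. A small boto3 sketch; the database, table, and column names are placeholders:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Inspect the statistics the catalog has generated for a table.
    stats = glue.get_column_statistics_for_table(
        DatabaseName="sales_db",
        TableName="orders",
        ColumnNames=["order_id", "order_total"],
    )
    for col in stats["ColumnStatisticsList"]:
        print(col["ColumnName"], col["StatisticsData"]["Type"])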

Automated statistics generation for the AWS Glue Data Catalog is generally available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo). Read the blog post and visit the AWS Glue Data Catalog documentation to learn more.

Amazon S3 Access Grants now integrate with AWS Glue

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your identity provider (IdP), such as Entra ID or Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added to and removed from user groups in the IdP.

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
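
The grant itself is created once with the S3 Access Grants control plane, outside of Glue. A hedged boto3 sketch; the account ID, location ID, group identifier, and prefix are placeholders:

    import boto3

    s3control = boto3.client("s3control", region_name="us-east-1")

    # Grant a directory group read/write access to a prefix. Glue 5.0 jobs
    # run by members of this group then pick up the permission automatically.
    grant = s3control.create_access_grant(
        AccountId="123456789012",
        AccessGrantsLocationId="default",  # registered location covering the bucket
        AccessGrantsLocationConfiguration={"S3SubPrefix": "analytics/raw/*"},
        Grantee={
            "GranteeType": "DIRECTORY_GROUP",
            "GranteeIdentifier": "f1e2d3c4-example-identity-center-group-id",
        },
        Permission="READWRITE",
    )
    print(grant["AccessGrantArn"])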

Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Amazon SageMaker now supports connectivity, discovery, querying, and enforcement of fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL, without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data.

Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, the new user experiences allow you to create and manage connections to data sources with ease. Now, organizations can extract insights from a unified set of data sources while strengthening their security posture, wherever their data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit the SageMaker Lakehouse documentation.
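
Once a federated source is registered in SageMaker Lakehouse, querying it from Athena is a normal SQL call, with the fine-grained policies enforced at query time. A boto3 sketch; the catalog, database, and table names are illustrative:

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Query a federated source through its Lakehouse catalog. Column- and
    # tag-based permissions are applied server-side, so the result contains
    # only the columns this principal is allowed to see.
    run = athena.start_query_execution(
        QueryString='SELECT customer_id, region FROM "customers" LIMIT 10',
        QueryExecutionContext={"Catalog": "ddb_catalog", "Database": "default"},  # placeholders
        WorkGroup="primary",
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print(run["QueryExecutionId"])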

AWS expands data connectivity for Amazon SageMaker Lakehouse and AWS Glue

Amazon SageMaker Lakehouse announces unified data connectivity capabilities to streamline the creation, management, and usage of connections to data sources across databases, data lakes, and enterprise applications. SageMaker Lakehouse unified data connectivity provides a connection configuration template, support for standard authentication methods like basic authentication and OAuth 2.0, connection testing, metadata retrieval, and data preview. Customers can create SageMaker Lakehouse connections through SageMaker Unified Studio (preview), the AWS Glue console, or custom-built applications using AWS Glue APIs.

With SageMaker Lakehouse unified data connectivity, a data connection is configured once and can be reused by SageMaker Unified Studio, AWS Glue, and Amazon Athena for use cases in data integration, data analytics, and data science. You gain confidence in an established connection by validating credentials with connection testing. With the ability to browse metadata, you can understand the structure and schema of the data source and identify relevant tables and fields. Lastly, the data preview capability supports mapping source fields to target schemas, identifying needed data transformations, and receiving immediate feedback on source data queries.

SageMaker Lakehouse unified connectivity is available wherever Amazon SageMaker Lakehouse or AWS Glue is available. To get started, visit the AWS Glue connection documentation or the Amazon SageMaker Lakehouse data connection documentation.

Introducing AWS Glue 5.0

Today, we are excited to announce the general availability of AWS Glue 5.0. With AWS Glue 5.0, you get improved performance, enhanced security, support for Amazon SageMaker Unified Studio and SageMaker Lakehouse, and more. AWS Glue 5.0 enables you to develop, run, and scale your data integration workloads and get insights faster.

AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources. AWS Glue 5.0 upgrades the engines to Apache Spark 3.5.2, Python 3.11, and Java 17, with new performance and security improvements. Glue 5.0 updates open table format support to Apache Hudi 0.15.0, Apache Iceberg 1.6.1, and Delta Lake 3.2.0 so you can solve advanced use cases around performance, cost, governance, and privacy in your data lakes. AWS Glue 5.0 adds Spark-native fine-grained access control with AWS Lake Formation so you can apply table-, column-, row-, and cell-level permissions on Amazon S3 data lakes. Finally, Glue 5.0 adds support for SageMaker Lakehouse to unify all your data across Amazon S3 data lakes and Amazon Redshift data warehouses.

AWS Glue 5.0 is generally available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Frankfurt), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), and South America (São Paulo) Regions. To learn more, visit the AWS Glue product page and documentation.
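
Opting a job into the new engine is a one-line change to its Glue version. A minimal boto3 sketch; the job name, role, and script location are placeholders:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")

    # Create an ETL job pinned to the Glue 5.0 engine, which brings
    # Spark 3.5.2, Python 3.11, and Java 17.
    glue.create_job(
        Name="orders-etl",
        Role="arn:aws:iam::123456789012:role/GlueJobRole",  # placeholder
        Command={
            "Name": "glueetl",
            "ScriptLocation": "s3://my-scripts/orders_etl.py",  # placeholder
            "PythonVersion": "3",
        },
        GlueVersion="5.0",
        WorkerType="G.1X",
        NumberOfWorkers=2,
    )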

Amazon Bedrock Model Distillation is now available in preview

With Amazon Bedrock Model Distillation, customers can use smaller, faster, more cost-effective models that deliver use-case-specific accuracy comparable to the most capable models in Amazon Bedrock.

Today, fine-tuning a smaller, cost-efficient model to increase its accuracy for a customer's use case is an iterative process in which customers need to write prompts and responses, refine the training dataset, ensure that the training dataset captures diverse examples, and adjust the training parameters. Amazon Bedrock Model Distillation automates the process of generating synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may choose to apply different data synthesis methods that are best suited to your use case to create a distilled model that approximately matches the advanced model for the specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples. Learn more in our documentation and blog.
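
Mechanically, distillation is expected to ride on Bedrock's existing model-customization job API. The following is a loose sketch only: the DISTILLATION customization type comes from the launch materials, but the nested configuration shape, model identifiers, and S3 URIs are assumptions, not confirmed API values:

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-west-2")

    # Distill a large "teacher" model into a smaller "student" model.
    # Every identifier and the customizationConfig shape below are
    # illustrative placeholders.
    bedrock.create_model_customization_job(
        jobName="support-bot-distillation",
        customModelName="support-bot-distilled",
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
        baseModelIdentifier="amazon.nova-micro-v1:0",  # student model
        customizationType="DISTILLATION",
        customizationConfig={
            "distillationConfig": {
                "teacherModelConfig": {
                    "teacherModelIdentifier": "amazon.nova-pro-v1:0",  # teacher model
                }
            }
        },
        trainingDataConfig={"s3Uri": "s3://my-training-data/prompts.jsonl"},
        outputDataConfig={"s3Uri": "s3://my-distillation-output/"},
    )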

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With them, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users’ operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization, and improving operational efficiency. You can now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning, optimizing your data ingestion processes so you can focus instead on analysis and gaining insights.

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions.

You can create and manage integrations using either the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

AWS announces Amazon SageMaker Lakehouse

AWS announces Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse that simplifies your analytics and artificial intelligence (AI). Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.

SageMaker Lakehouse gives you the flexibility to access and query your data in place with the Apache Iceberg open standard. All data in SageMaker Lakehouse can be queried from SageMaker Unified Studio (preview) and engines such as Amazon EMR, AWS Glue, Amazon Redshift, or Apache Spark. You can secure your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. With SageMaker Lakehouse, you can also use your existing investments: you can seamlessly make data from your Redshift data warehouses available for analytics and AI/ML, and you can now create data lakes by leveraging the analytics-optimized Redshift Managed Storage (RMS). Bringing data into the lakehouse is easy. You can use zero-ETL to bring in data from operational databases, streaming services, and applications, or query data in place via federated query.

SageMaker Lakehouse is available in the US East (N. Virginia), US East (Ohio), Europe (Ireland), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (São Paulo) AWS Regions.

SageMaker Lakehouse is accessible directly from SageMaker Unified Studio. In addition, you can access SageMaker Lakehouse from the AWS Management Console, AWS Glue APIs, and the AWS CLI. To learn more, visit SageMaker Lakehouse and read the launch blog. For pricing information, please visit the pricing page.

Amazon Q Developer announces automatic unit test generation to accelerate feature development

Today, Amazon Q Developer announces the general availability of a new agent that automates the process of generating unit tests. This agent can be initiated with a simple prompt: “/test”. Once prompted, Amazon Q uses its knowledge of your project to automatically generate and add tests to your project, helping you improve code quality quickly.

Amazon Q Developer also asks for your consent before adding tests, keeping you in the loop so that no unintended changes are made. Automation saves the time and effort needed to write comprehensive unit tests, allowing you to focus on building innovative features. With the ability to quickly add unit tests and increase coverage across code, organizations can ship code more safely and reliably, accelerating feature development across the software development lifecycle.

Automatic unit test generation is generally available within the Visual Studio Code and JetBrains integrated development environments (IDEs), and in public preview as part of the new GitLab Duo with Amazon Q offering, in all AWS Regions where Amazon Q Developer is available. Learn more about unit test generation.

Introducing the next generation of Amazon SageMaker

Today, AWS announces the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI. This launch brings together widely adopted AWS machine learning and analytics capabilities and provides an integrated experience for analytics and AI with unified access to data and built-in governance. Teams can collaborate and build faster from a single development environment using familiar AWS tools for model development, generative AI application development, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development.

The next generation of SageMaker also introduces new capabilities, including Amazon SageMaker Unified Studio (preview), Amazon SageMaker Lakehouse, and Amazon SageMaker Data and AI Governance. Within the new SageMaker Unified Studio, users can discover their data and put it to work using the best tool for the job across data and AI use cases. SageMaker Unified Studio brings together functionality and tools from the range of standalone studios, query editors, and visual tools available today in Amazon EMR, AWS Glue, Amazon Redshift, Amazon Bedrock, and the existing Amazon SageMaker Studio. SageMaker Lakehouse provides an open data architecture that reduces data silos and unifies data across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. SageMaker Lakehouse offers the flexibility to access and query data with Apache Iceberg-compatible tools and engines. SageMaker Data and AI Governance, including Amazon SageMaker Catalog built on Amazon DataZone, empowers users to securely discover, govern, and collaborate on data and AI workflows.

For more information on AWS Regions where the next generation of Amazon SageMaker is available, see Supported Regions. 

To learn more and get started, visit the following resources:

Amazon SageMaker product page

Amazon SageMaker documentation

Amazon SageMaker in the AWS Management Console

Amazon Q Developer adds operational investigation capability (Preview)

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment in a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey, from issue detection and triage through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment, including CloudWatch telemetry, AWS CloudTrail logs, deployment information, changes to resource configuration, and AWS Health events.

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows such as Slack via AWS Chatbot. 

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see getting started and best practice documentation.

Amazon EC2 Trn2 instances are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn2 instances and the preview of Trn2 UltraServers, powered by AWS Trainium2 chips. Available via EC2 Capacity Blocks, Trn2 instances and UltraServers are the most powerful EC2 compute solutions for deep learning and generative AI training and inference.

You can use Trn2 instances to train and deploy the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of AI applications. To reduce training times and deliver breakthrough response times (per-token latency) for the most capable, state-of-the-art models, you might need more compute and memory than a single instance can deliver. Trn2 UltraServers are a completely new EC2 offering that uses NeuronLink, a high-bandwidth, low-latency fabric, to connect 64 Trainium2 chips across 4 Trn2 instances into one node, unlocking unparalleled performance. For inference, UltraServers help deliver industry-leading response times to create the best real-time experiences. For training, UltraServers boost model training speed and efficiency with faster collective communication for model parallelism as compared to standalone instances.

Trn2 instances feature 16 Trainium2 chips to deliver up to 20.8 petaflops of FP8 compute, 1.5 TB of high-bandwidth memory with 46 TB/s of memory bandwidth, and 3.2 Tbps of EFA networking. Trn2 UltraServers feature 64 Trainium2 chips to deliver up to 83.2 petaflops of FP8 compute, 6 TB of total high-bandwidth memory with 185 TB/s of total memory bandwidth, and 12.8 Tbps of EFA networking. Both are deployed in EC2 UltraClusters to provide non-blocking, petabit scale-out capabilities for distributed training. Trn2 instances are generally available in the trn2.48xlarge size in the US East (Ohio) AWS Region through EC2 Capacity Blocks for ML.
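
Because Trn2 capacity is sold through EC2 Capacity Blocks for ML, finding a reservable block is an API call. A boto3 sketch; the dates, counts, and printed fields are illustrative:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

    # Search for a reservable block of Trn2 capacity within a date window.
    offerings = ec2.describe_capacity_block_offerings(
        InstanceType="trn2.48xlarge",
        InstanceCount=1,
        CapacityDurationHours=24,
        StartDateRange="2024-12-10T00:00:00Z",
        EndDateRange="2024-12-20T00:00:00Z",
    )
    for offering in offerings["CapacityBlockOfferings"]:
        print(offering["CapacityBlockOfferingId"], offering["StartDate"], offering["UpfrontFee"])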

To learn more about Trn2 instances and request access to Trn2 UltraServers please visit the Trn2 instances page.

Data Lineage is now generally available in Amazon DataZone and the next generation of Amazon SageMaker

AWS announces the general availability of data lineage in Amazon DataZone and the next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Because the feature is OpenLineage compatible, data producers can augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through the API, providing a comprehensive view of data movement to data consumers.

This feature automates the capture of schema and transformation lineage for data assets and columns from AWS Glue, Amazon Redshift, and Spark executions, helping maintain consistency and reduce errors. With built-in automation, domain administrators and data producers can automate the capture and storage of lineage events when data is configured for sharing in the business data catalog. Data consumers can gain confidence in an asset’s origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset’s or job’s history. This historical lineage provides a deeper understanding of how data has evolved, which is essential for troubleshooting, auditing, and validating the integrity of data assets.

The data lineage feature is generally available in all AWS Regions where Amazon DataZone and the next generation of Amazon SageMaker are available. To learn more, visit Amazon DataZone and the next generation of Amazon SageMaker.
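
Because the feature speaks OpenLineage, custom producers can push their own events. A sketch using the Amazon DataZone PostLineageEvent API; the domain ID is a placeholder and the OpenLineage RunEvent is trimmed to a minimal illustrative shape:

    import json
    from datetime import datetime, timezone

    import boto3

    datazone = boto3.client("datazone", region_name="us-east-1")

    # A minimal OpenLineage RunEvent marking a job run as complete.
    event = {
        "eventType": "COMPLETE",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": "01939c9a-0000-7000-8000-000000000000"},
        "job": {"namespace": "my-pipeline", "name": "orders_daily_load"},
        "inputs": [{"namespace": "s3://raw-bucket", "name": "orders"}],
        "outputs": [{"namespace": "redshift://my-cluster", "name": "analytics.orders"}],
        "producer": "https://example.com/my-lineage-producer",  # placeholder
        "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json#/definitions/RunEvent",
    }

    datazone.post_lineage_event(
        domainIdentifier="dzd_exampledomainid",  # placeholder domain ID
        event=json.dumps(event),
    )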

Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images, and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources such as databases, data lakes, and data warehouses with contextual information from unstructured sources. Users can get augmented insights within QuickSight’s BI interface across multi-visual Q&A and data stories. With multi-visual Q&A, users can ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What's New post, and try QuickSight free for 30 days.

Announcing GitLab Duo with Amazon Q (Preview)

Today, AWS announces a preview of GitLab Duo with Amazon Q, embedding advanced agent capabilities for software development and workload transformation directly in GitLab’s enterprise DevSecOps platform. With this launch, GitLab Duo with Amazon Q delivers a seamless development experience across tasks and teams, automating complex, multi-step tasks for software development, security, and transformation, all using the familiar GitLab workflows developers already know.

Using GitLab Duo, developers can delegate issues to Amazon Q agents using quick actions to build new features faster, maximize quality and security with AI-assisted code reviews, create and execute unit tests, and upgrade legacy Java codebases. GitLab’s unified data store across the software development life cycle (SDLC) gives Amazon Q project context to accelerate and automate end-to-end workflows for software development, simplifying the complex toolchains historically required for collaboration across teams.

Streamline software development: Go from a new feature idea in an issue to merge-ready code in minutes. Iterate directly from GitLab, using feedback in comments to accelerate development workflows end-to-end.

Optimize code: Generate unit tests for new merge requests to save developer time and ensure consistent quality assurance practices are enforced across teams.

Maximize quality and security: Provide AI-driven code quality and security reviews, with generated fixes, to accelerate feedback cycles.

Transform enterprise workloads: Starting with Java 8 or 11 codebases, developers can upgrade to Java 17 directly from a GitLab project to improve application security and performance, and remove technical debt.

Visit the Amazon Q Developer integrations page to learn more.

Announcing the preview of Amazon SageMaker Unified Studio

Today, AWS announces the next generation of Amazon SageMaker, including the preview launch of Amazon SageMaker Unified Studio, an integrated data and AI development environment that enables collaboration and helps teams build data products faster. SageMaker Unified Studio brings together familiar tools from AWS analytics and AI/ML services for data processing, SQL analytics, machine learning model development, and generative AI application development. Amazon SageMaker Lakehouse, which is accessible through SageMaker Unified Studio, provides open source compatibility and access to data stored across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. Enhanced governance features are built in to help you meet enterprise security requirements.

SageMaker Unified Studio allows you to find, access, and query data and AI assets across your organization, then work together in projects to securely build and share analytics and AI artifacts, including data, models, and generative AI applications. SageMaker Unified Studio offers the capabilities to build integrated data pipelines with visual extract, transform, and load (ETL), develop ML models, and create custom generative AI applications. New unified Jupyter notebooks enable seamless work across different compute resources and clusters, while an integrated SQL editor lets you query your data stored in various sources, all within a single, collaborative environment. Amazon Bedrock IDE, formerly Amazon Bedrock Studio, is now part of SageMaker Unified Studio in public preview, offering the capabilities to rapidly build and customize generative AI applications. Amazon Q Developer, the most capable generative AI assistant for software development, is integrated into SageMaker Unified Studio to accelerate and streamline tasks across the development lifecycle.

For more information on AWS Regions where SageMaker Unified Studio is available in preview, see Supported Regions.

To get started, see the following resources:

SageMaker overview

SageMaker documentation

SageMaker in the AWS Management Console

Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads

Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support and are the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query throughput and up to 10x higher transactions per second compared to self-managed tables. With S3 Tables support for the Apache Iceberg standard, your tabular data can be easily queried by popular AWS and third-party query engines. Additionally, S3 Tables are designed to perform continual table maintenance to automatically optimize query efficiency and storage cost over time, even as your data lake scales and evolves. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight.

S3 Tables introduce table buckets, a new bucket type that is purpose-built to store tabular data. With table buckets, you can quickly create tables and set up table-level permissions to manage access to your data lake. You can then load and query data in your tables with standard SQL, and take advantage of Apache Iceberg’s advanced analytics capabilities such as row-level transactions, queryable snapshots, schema evolution, and more. Table buckets also provide policy-driven table maintenance, helping you automate operational tasks such as compaction, snapshot management, and unreferenced file removal.
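
Getting a first table in place is a couple of API calls. A sketch with the new s3tables client in boto3; the bucket, namespace, and table names are placeholders, and the operation shapes follow the launch documentation as best understood, so verify them against the current SDK:

    import boto3

    s3tables = boto3.client("s3tables", region_name="us-east-1")

    # Create a table bucket, a namespace, and an Iceberg table inside it.
    bucket = s3tables.create_table_bucket(name="analytics-tables")
    bucket_arn = bucket["arn"]

    s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])

    table = s3tables.create_table(
        tableBucketARN=bucket_arn,
        namespace="sales",
        name="daily_orders",
        format="ICEBERG",
    )
    print(table["tableARN"])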

Amazon S3 Tables are now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Amazon Q Developer can now generate documentation within your source code

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects.

Today, developers report that they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues, and finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repositories, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing codebases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio

Today we are announcing the preview launch of Amazon Bedrock IDE, a governed, collaborative environment integrated within Amazon SageMaker Unified Studio (preview) that enables developers to swiftly build and tailor generative AI applications. It provides an intuitive interface for developers across various skill levels to access Amazon Bedrock’s high-performing foundation models (FMs) and advanced customization capabilities to collaboratively build custom generative AI applications.

Amazon Bedrock IDE’s integration into Amazon SageMaker Unified Studio removes barriers between data, tools, and builders for generative AI development. Teams can now access their preferred analytics and ML tools alongside Amazon Bedrock IDE’s specialized tools for building generative AI applications. Developers can leverage Retrieval Augmented Generation (RAG) to create Knowledge Bases from their proprietary data sources, Agents for complex task automation, and Guardrails for responsible AI development. This unified workspace reduces complexity, accelerating the prototyping, iteration, and deployment of production-ready, responsible generative AI apps aligned with business needs.

Amazon Bedrock IDE is now available in Amazon SageMaker Unified Studio and supported in five AWS Regions. For more information on supported Regions, refer to the Amazon SageMaker Unified Studio regions guide. Learn more about Amazon Bedrock IDE and its features by visiting the Amazon Bedrock IDE user guide, and get started with Bedrock IDE by enabling a “Generative AI application development” project profile using the admin guide.

Announcing Amazon Nova foundation models available today in Amazon Bedrock

We’re excited to announce Amazon Nova, a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry-leading price performance. The Amazon Nova models available today in Amazon Bedrock are:

Amazon Nova Micro, a text-only model that delivers the lowest-latency responses at very low cost.

Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.

Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.

Amazon Nova Canvas, a state-of-the-art image generation model.

Amazon Nova Reel, a state-of-the-art video generation model.

Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are among the fastest and most cost-effective models in their respective intelligence classes. These models have also been optimized to make them easy to use and effective in RAG and agentic applications. With text and vision fine-tuning on Amazon Bedrock, you can customize Amazon Nova Micro, Lite, and Pro to deliver the optimal intelligence, speed, and cost for your needs. With Amazon Nova Canvas and Amazon Nova Reel, you get access to production-grade visual content generation, with built-in controls for safe and responsible AI use such as watermarking and content moderation. You can see the latest benchmarks and examples of these models on the Amazon Nova product page.

Amazon Nova foundation models are available in Amazon Bedrock in the US East (N. Virginia) Region. The Amazon Nova Micro, Lite, and Pro models are also available in the US West (Oregon) and US East (Ohio) Regions via cross-Region inference. Learn more about Amazon Nova at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with the Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.
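
Because the Nova models are standard Bedrock models, the existing Converse API works unchanged; only the model ID is new. A minimal boto3 sketch, with the model ID following the launch naming and the inference settings chosen arbitrarily:

    import boto3

    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Send a simple text prompt to Amazon Nova Micro via the Converse API.
    response = bedrock_runtime.converse(
        modelId="amazon.nova-micro-v1:0",
        messages=[
            {"role": "user", "content": [{"text": "Summarize zero-ETL in two sentences."}]}
        ],
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    print(response["output"]["message"]["content"][0]["text"])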

Amazon Q Business now provides insights from your databases and data warehouses (Preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on the data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data. With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight's extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and QuickSight Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Announcing Amazon Aurora DSQL (Preview)

Today, AWS announces the preview of Amazon Aurora DSQL, a new serverless, distributed SQL database with active-active high availability. Aurora DSQL allows you to build always-available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resiliency effortless for your applications, and offers the fastest distributed SQL reads and writes.

Aurora DSQL provides virtually unlimited horizontal scaling with the flexibility to independently scale reads, writes, compute, and storage. It automatically scales to meet any workload demand without database sharding or instance upgrades. Its active-active distributed architecture is designed for 99.99% single-Region and 99.999% multi-Region availability with no single point of failure, and automated failure recovery. This ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. Aurora DSQL is PostgreSQL compatible, offering an easy-to-use developer experience.
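
PostgreSQL compatibility means existing drivers work as-is; the main difference is IAM-based authentication. A sketch using psycopg2, where the cluster endpoint is a placeholder and the token-generation helper on the dsql client is an assumption based on the preview SDK, so check the documentation for the exact call:

    import boto3
    import psycopg2

    # Placeholder cluster endpoint from the Aurora DSQL console.
    host = "abc123example.dsql.us-east-1.on.aws"

    # Generate a short-lived IAM auth token to use as the password.
    # The helper name and signature below are assumptions from the preview SDK.
    dsql = boto3.client("dsql", region_name="us-east-1")
    token = dsql.generate_db_connect_admin_auth_token(host, "us-east-1")

    conn = psycopg2.connect(
        host=host,
        port=5432,
        user="admin",
        password=token,
        dbname="postgres",
        sslmode="require",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())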

Aurora DSQL is now available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). 

To learn more about Aurora DSQL features and benefits, check out the Aurora DSQL overview page and documentation. Aurora DSQL is available at no charge during preview. Get started in only a few steps by going to the Aurora DSQL console or using the Aurora DSQL API or AWS CLI.

Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit Amazon Q Developer transformation capabilities webpage and documentation.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Contact Center

Containers

AWS Database Blog

AWS Machine Learning Blog

Networking & Content Delivery

AWS Security Blog

Open Source Project

AWS CLI

Amplify for JavaScript