6/25/2025, 12:00:00 AM ~ 6/26/2025, 12:00:00 AM (UTC)

Recent Announcements

Amazon SageMaker now supports automatic synchronization from Git to S3

Today, AWS announces a new feature for Amazon SageMaker Unified Studio that automatically synchronizes files from project Git repositories to Amazon Simple Storage Service (Amazon S3) buckets. Amazon SageMaker Unified Studio is a single data and AI development environment that brings together functionality and tools from AWS Analytics and AI/ML services. It facilitates building, deploying, executing, and monitoring workflows from a single interface.

Automatic synchronization keeps production environments in sync with code changes, eliminating manual intervention and streamlining developers' workflows. This is particularly valuable for developers using unified scheduling for visual extract, transform, load (ETL) flows and SQL query books, where having the latest code artifacts readily available in Amazon S3 buckets is crucial for successful execution. This new feature is now available in all AWS Regions where Amazon SageMaker Unified Studio is available. See the supported Region list for the most up-to-date availability information. To learn more, visit the Amazon SageMaker documentation.

Amazon FSx for OpenZFS now supports Amazon S3 access

You can now attach Amazon Simple Storage Service (Amazon S3) Access Points to your Amazon FSx for OpenZFS file systems so that you can access your file data as if it were in S3. With this new capability, your file data in FSx for OpenZFS is accessible for use with the broad range of artificial intelligence, machine learning, and analytics services and applications that work with S3, while your file data continues to reside on the FSx for OpenZFS file system.

An S3 Access Point is an endpoint that helps control and simplify how different applications or users can access data. S3 Access Points now work with FSx for OpenZFS so that applications and services can access file data in FSx for OpenZFS using the S3 API, as if the data were in S3. You can discover new insights, innovate faster, and make even better data-driven decisions with your data in FSx for OpenZFS. For example, you can use your file data to augment generative AI applications with Amazon Bedrock, train machine learning models with Amazon SageMaker, run analyses using AWS Glue, and work with a wide range of AWS Data and Analytics Competency Partner solutions and S3-based cloud-native applications.

Get started with this capability by creating and attaching S3 Access Points to your FSx for OpenZFS file systems using the Amazon FSx console, the AWS Command Line Interface (AWS CLI), or the AWS Software Development Kit (AWS SDK) in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland, Stockholm), and Asia Pacific (Hong Kong, Singapore, Sydney, Tokyo). To learn more, visit the product page, user guide, and AWS News Blog.
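The S3 side of this integration uses standard S3 API calls. As a minimal sketch, assuming an access point has already been created and attached to the file system (the ARN, Region, and object keys below are placeholders), file data can be listed and read with boto3 exactly as if it lived in a bucket:

```python
import boto3

# Placeholder ARN of an S3 Access Point already attached to an
# FSx for OpenZFS file system (created via the FSx console, CLI, or SDK).
ACCESS_POINT_ARN = "arn:aws:s3:us-east-1:111122223333:accesspoint/my-fsx-openzfs-ap"

s3 = boto3.client("s3", region_name="us-east-1")

# S3 clients accept an access point ARN wherever a bucket name is expected,
# so ordinary S3 calls operate on the file system's data.
resp = s3.list_objects_v2(Bucket=ACCESS_POINT_ARN, Prefix="datasets/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Read a file from the FSx for OpenZFS volume through the S3 API.
body = s3.get_object(Bucket=ACCESS_POINT_ARN, Key="datasets/train.csv")["Body"]
print(body.read(200))
```

Because the access point behaves like any other S3 Access Point, the same pattern carries over to services such as Amazon Bedrock or SageMaker that consume data via the S3 API.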

Amazon Bedrock Flows announces preview of persistent long-running execution and inline-code support

Amazon Bedrock Flows enables you to link foundation models (FMs), Amazon Bedrock Prompts, Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and other AWS services together to build and scale pre-defined generative AI workflows. Today, we announce the preview of persistent execution for long-running workflows and inline-code execution support within flows.

Bedrock Flows customers currently encounter three key limitations when authoring, executing, and monitoring workflows: a two-minute idle timeout restriction per step, the need for custom API-based monitoring solutions, and the requirement to create Lambda functions for basic data processing tasks. Starting today, we're addressing these challenges with new preview features that extend workflow step execution times to 15 minutes. The new capabilities include built-in execution tracking directly in the AWS Management Console, eliminating the need for custom monitoring code. You can now execute Python scripts using the new inline-code node type, removing the overhead of setting up Lambda functions for simple data processing. These enhancements significantly streamline workflow development and management in Amazon Bedrock Flows, helping you focus on building your generative AI applications.

Long-running flow executions are now available in all commercial AWS Regions where Amazon Bedrock Flows is supported. The inline code node is available in US East (N. Virginia), US West (Oregon), and Europe (Frankfurt). To get started, see the AWS user guide.
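For a sense of how a persistent execution is driven programmatically, here is a rough boto3 sketch. The action names and request shapes below (start_flow_execution, get_flow_execution, the inputs structure, and the status values) are assumptions based on the preview and should be verified against the user guide; the flow and alias IDs are placeholders:

```python
import time
import boto3

FLOW_ID = "FLOW1234"        # placeholder flow ID
FLOW_ALIAS_ID = "ALIAS567"  # placeholder flow alias ID

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Start an asynchronous, persistent execution instead of a synchronous
# invoke_flow call, so long-running steps are not cut off by idle timeouts.
start = client.start_flow_execution(
    flowIdentifier=FLOW_ID,
    flowAliasIdentifier=FLOW_ALIAS_ID,
    inputs=[{
        "content": {"document": {"topic": "quarterly report"}},
        "nodeName": "FlowInputNode",
        "nodeOutputName": "document",
    }],
)
execution_arn = start["executionArn"]

# Poll the execution until it reaches a terminal state; the same status is
# visible in the console's built-in execution tracking.
while True:
    state = client.get_flow_execution(
        flowIdentifier=FLOW_ID,
        flowAliasIdentifier=FLOW_ALIAS_ID,
        executionIdentifier=execution_arn,
    )
    if state["status"] in ("Succeeded", "Failed", "TimedOut", "Aborted"):
        print("final status:", state["status"])
        break
    time.sleep(10)
```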

Amazon S3 Tables are now available in two additional AWS Regions

Amazon S3 Tables are now available in two additional AWS Regions: Asia Pacific (Thailand) and Mexico (Central). S3 Tables deliver the first cloud object store with built-in Apache Iceberg support and are the easiest way to store tabular data at scale.

With this expansion, S3 Tables are now generally available in thirty-two AWS Regions. To learn more, visit the product page, documentation, and the S3 pricing page.
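As a quick orientation, a table bucket in one of the new Regions can be created with the s3tables API. A minimal sketch, assuming the boto3 s3tables client's create_table_bucket, create_namespace, and create_table actions, with placeholder names (Asia Pacific (Thailand) is ap-southeast-7; Mexico (Central) is mx-central-1):

```python
import boto3

# Create an S3 table bucket in one of the newly supported Regions.
s3tables = boto3.client("s3tables", region_name="ap-southeast-7")

bucket = s3tables.create_table_bucket(name="analytics-tables")
bucket_arn = bucket["arn"]

# Namespaces group tables; tables are stored in Apache Iceberg format.
s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])
table = s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="sales",
    name="orders",
    format="ICEBERG",
)
print("created table:", table["tableARN"])
```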

Amazon EC2 C7g instances are now available in the AWS Israel (Tel Aviv) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g instances are available in the AWS Israel (Tel Aviv) Region. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C7g. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
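Launching one of these instances programmatically is a single API call. A minimal boto3 sketch, where il-central-1 is the Israel (Tel Aviv) Region code and the AMI ID is a placeholder (C7g is Graviton-based, so an arm64 AMI is required):

```python
import boto3

# Launch a Graviton3-based C7g instance in the Israel (Tel Aviv) Region.
ec2 = boto3.client("ec2", region_name="il-central-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any arm64 AMI in il-central-1
    InstanceType="c7g.xlarge",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "c7g-tel-aviv-test"}],
    }],
)
print("launched:", resp["Instances"][0]["InstanceId"])
```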

AWS Glue enables enhanced Apache Spark capabilities for AWS Lake Formation tables with full table access

AWS Glue now supports read and write operations from AWS Glue 5.0 Apache Spark jobs on AWS Lake Formation registered tables when the job role has full table access. This capability enables Data Definition Language (DDL) and Data Manipulation Language (DML) operations, including CREATE, ALTER, DELETE, UPDATE, and MERGE INTO statements, on Apache Hive and Iceberg tables from within the same Apache Spark application.

While Lake Formation's fine-grained access control (FGAC) offers granular security controls at the row, column, and cell levels, many ETL workloads simply need full table access. This new feature enables AWS Glue 5.0 Spark jobs to directly read and write data when full table access is granted, removing limitations that previously restricted certain extract, transform, and load (ETL) operations. You can now leverage advanced Spark capabilities, including Resilient Distributed Datasets (RDDs), custom libraries, and user-defined functions (UDFs), with Lake Formation tables. Additionally, data teams can run complex, interactive Spark applications through SageMaker Unified Studio in compatibility mode while maintaining Lake Formation's table-level security boundaries. This feature is available in all AWS Regions where AWS Glue and AWS Lake Formation are supported. To learn more, visit the AWS Glue product page and documentation.
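To illustrate what this unlocks, here is a hedged sketch of a Glue 5.0 PySpark script that merges updates into a Lake Formation registered Iceberg table. The catalog name, warehouse path, and table names are placeholders, and the Iceberg-on-Glue catalog settings are the standard open-source ones rather than anything specific to this launch; the key point is that MERGE INTO runs directly in the job once the job role holds full table access in Lake Formation:

```python
from pyspark.sql import SparkSession

# Assumed Iceberg-on-Glue Data Catalog configuration; "glue_catalog" and the
# warehouse path are placeholders for your environment.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://my-bucket/warehouse/")
    .getOrCreate()
)

# With full table access granted on both tables in Lake Formation, DML such
# as MERGE INTO works from the same Spark application.
spark.sql("""
    MERGE INTO glue_catalog.sales.orders AS t
    USING glue_catalog.sales.orders_updates AS u
    ON t.order_id = u.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```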

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Database Blog

Desktop and Application Streaming

AWS for Industries

Artificial Intelligence

AWS Quantum Technologies Blog

Open Source Project

AWS CLI