3/4/2025, 12:00:00 AM ~ 3/5/2025, 12:00:00 AM (UTC)
Recent Announcements
Amazon Q Business now supports insights from audio and video data
Today, we are excited to announce that Amazon Q Business now supports the ingestion of audio and video data. This new feature enables Amazon Q customers to search through ingested audio and video content, allowing them to ask questions based on the information contained within these media files.

This enhancement significantly expands the capabilities of Amazon Q Business, making it an even more powerful tool for organizations to access and utilize their multimedia content and unlock valuable insights from their audio and video resources. Users can now easily search for specific information within recorded meetings, training videos, podcasts, or any other audio or video content ingested into Amazon Q Business. This capability streamlines information retrieval, enhances knowledge sharing, and improves decision-making by making multimedia content as searchable and accessible as text-based documents. The audio and video ingestion feature uses Amazon Bedrock Data Automation to process customers' multimodal assets.

The feature is available in the US East (N. Virginia) and US West (Oregon) AWS Regions. Customers can start using it in supported Regions to enhance their organization's knowledge management and information discovery. To get started with ingesting audio and video data in Amazon Q Business, visit the Amazon Q console or refer to the documentation. For more information about Amazon Q Business and its features, please visit the Amazon Q product page.
SageMaker HyperPod Flexible Training Plans now support instant start times and multiple offers
As of February 14, 2025, SageMaker Flexible Training Plans now support instant start times that allow customers to book a plan starting as soon as the next 30 minutes.

Amazon SageMaker's Flexible Training Plans (FTP) make it easy for customers to access GPU capacity to run ML workloads. Customers who use Flexible Training Plans can plan their ML development cycles with confidence, knowing they'll have the GPUs they need on a specific date for the amount of time they reserve. There are no long-term commitments, so customers get capacity assurance while paying only for the amount of GPU time necessary to complete their workloads.
With the ability to start a reservation within 30 minutes (subject to availability), Flexible Training Plan accelerates compute resource procurement for customers running machine learning workloads. The system first attempts to find a single, continuous block of reserved capacity that precisely matches a customer’s requirement. If a continuous block isn’t available, SageMaker automatically splits the total duration across two time segments and attempts to fulfill the request using two separate reserved capacity blocks. Additionally, with this release, Flexible Training Plan will return up to three distinct options, providing flexibility in compute resource procurement.
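The capacity-search behavior described above can be modeled as a simple scheduling routine. The sketch below is purely illustrative and not the SageMaker implementation; the free-block data structure and function names are hypothetical.

```python
def find_offers(free_blocks, hours_needed, max_offers=3):
    """Illustrative model of the search described above: prefer a single
    continuous block of reserved capacity; if none fits, split the total
    duration across two separate blocks. Returns up to `max_offers`
    options, mirroring the "up to three distinct options" behavior.

    `free_blocks` is a hypothetical list of (start, available_hours) pairs.
    """
    offers = []
    # First pass: look for a single, continuous block that covers the
    # full requested duration.
    for start, avail in free_blocks:
        if avail >= hours_needed:
            offers.append([(start, hours_needed)])
    # Second pass: if no single block fits, split the duration across
    # two time segments drawn from two separate capacity blocks.
    if not offers:
        for i, (s1, a1) in enumerate(free_blocks):
            for s2, a2 in free_blocks[i + 1:]:
                if a1 + a2 >= hours_needed:
                    first = min(a1, hours_needed)
                    offers.append([(s1, first), (s2, hours_needed - first)])
    return offers[:max_offers]
```

For example, a 40-hour request against two 24-hour blocks yields one split offer of 24 + 16 hours across the two segments.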
You can create a Training Plan using either the SageMaker AI console or programmatic methods. The SageMaker AI console offers a visual, graphical interface with a comprehensive view of your options, while programmatic creation can be done using the AWS CLI or SageMaker SDKs to interact directly with the training plans API. You can get started with the API experience here.
Amazon Lex launches support for Confirmation and Alphanumeric slot types for Korean
Amazon Lex now supports the Confirmation and Alphanumeric slot types in the Korean (ko-KR) locale. These built-in slot types help developers build more natural and efficient conversational experiences in Korean-language applications.

The Confirmation slot type automatically resolves various Korean expressions into 'Yes', 'No', 'Maybe', and 'Don't know' values, eliminating the need for custom slots with multiple synonyms. The Alphanumeric slot type enables capturing combinations of letters and numbers, with support for regular expressions to validate specific formats, making it easier to collect structured data such as identification numbers or reference codes.

Korean support for these slot types is available in all AWS Regions where Amazon Lex V2 operates. To learn more about implementing these features, visit the Amazon Lex documentation for Custom Vocabulary and Alphanumerics.
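The regular-expression validation works the same way as in other locales. As a quick illustration, here is the kind of pattern a bot might attach to an Alphanumeric slot; the reference-code format below is hypothetical, shown in plain Python for clarity.

```python
import re

# Hypothetical reference-code format: two uppercase letters followed by
# six digits, e.g. "AB123456". In Lex, a pattern like this would be set
# as the Alphanumeric slot's regex so only matching values fill the slot.
REFERENCE_CODE = re.compile(r"^[A-Z]{2}[0-9]{6}$")

def is_valid_reference_code(value: str) -> bool:
    """Return True if the captured slot value matches the expected format."""
    return REFERENCE_CODE.fullmatch(value) is not None
```

Note that Lex restricts Alphanumeric slot regexes to a subset of regular-expression syntax, so it is worth checking the documentation before reusing a pattern verbatim.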
AWS Secrets Manager increases API requests-per-second limits
AWS Secrets Manager now supports higher request rates for its core API operations: GetSecretValue and DescribeSecret. GetSecretValue now supports up to 10,000 requests per second, and DescribeSecret supports 40,000 requests per second. The increased API limits are available at no additional cost and are applied automatically to your AWS accounts; no further action is required.

The increased limits for GetSecretValue and DescribeSecret are available in all Regions where the service operates. For a list of Regions where Secrets Manager is available, see the AWS Region table. To learn more about Secrets Manager API operations, visit our API reference.
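Even with the higher limits, a common pattern is to cache secret values client-side rather than calling GetSecretValue on every request (AWS also publishes caching libraries for this). A minimal TTL-cache sketch in plain Python, where `fetch` stands in for a real GetSecretValue call made through an AWS SDK:

```python
import time

class SecretCache:
    """Tiny TTL cache: refresh a secret only after `ttl` seconds have
    passed, so steady-state traffic issues far fewer GetSecretValue
    calls. `fetch` is a stand-in for the real API call."""

    def __init__(self, fetch, ttl=300.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl
        self._clock = clock            # injectable for testing
        self._cache = {}               # secret_id -> (value, fetched_at)

    def get(self, secret_id):
        now = self._clock()
        entry = self._cache.get(secret_id)
        if entry is None or now - entry[1] >= self._ttl:
            # Cache miss or stale entry: fetch and remember the value.
            entry = (self._fetch(secret_id), now)
            self._cache[secret_id] = entry
        return entry[0]
```

Rotated secrets surface after at most one TTL interval, so the TTL should be chosen shorter than the rotation schedule.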
AWS Transfer Family announces reduced login latency for SFTP servers
AWS Transfer Family has reduced service-side login latency from 1-2 seconds to under 500 milliseconds.

AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, FTP, and web browser-based transfers directly into and out of AWS storage services. With this launch, you benefit from significantly reduced latency when the service initiates a transfer over SFTP. This optimization offers substantial benefits, particularly for high-frequency, low-latency use cases with automated processes or applications requiring rapid file operations.

Reduced server-side login latency is immediately available at no additional cost for all new and existing Transfer Family SFTP servers in all AWS Regions where the service is available. To create an SFTP server, visit the Transfer Family User Guide.
AWS Lambda adds support for Amazon CloudWatch Logs Live Tail in VS Code IDE
AWS Lambda now supports Amazon CloudWatch Logs Live Tail in the VS Code IDE through the AWS Toolkit for Visual Studio Code. Live Tail is an interactive log streaming and analytics capability that provides real-time visibility into logs, making it easier to develop and troubleshoot Lambda functions.

We previously announced support for Live Tail in the Lambda console, enabling developers to view and analyze Lambda logs in real time. Now, with Live Tail support in the VS Code IDE, developers can monitor Lambda function logs in real time while staying within their development environment, eliminating the need to switch between interfaces for coding and log analysis. This makes it easier to quickly test and validate code or configuration changes, accelerating the author-test-deploy cycle when building applications on Lambda. It also makes it easier to detect and debug failures and critical errors in Lambda function code, reducing mean time to recovery (MTTR) when troubleshooting.

Using Live Tail for Lambda in VS Code is straightforward. After installing the latest version of the AWS Toolkit for Visual Studio Code, open the AWS Explorer panel, navigate to the desired Lambda function, right-click, and select "Tail Logs" to begin streaming logs in real time. To learn more about using Live Tail for Lambda in VS Code, visit the AWS Toolkit developer guide. To learn more about CloudWatch Logs Live Tail, visit the CloudWatch Logs developer guide.
Amazon Neptune Database is now available in AWS Asia Pacific (Malaysia) Region
Amazon Neptune Database is now available in the Asia Pacific (Malaysia) Region on engine versions 1.4.3.0 and later. You can now create Neptune clusters using R6g, R6i, T4g, and T3 instance types in this Region.

Amazon Neptune Database is a fast, reliable, and fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or the SPARQL query language on the W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production.

To get started, you can create a new Neptune cluster using the AWS Management Console, the AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and Region availability, refer to the Neptune pricing page and the AWS Region Table.
AWS CodeBuild now supports non-container builds in on-demand fleets
AWS CodeBuild now supports non-container builds on Linux x86, Arm, and Windows on-demand fleets, letting you run build commands directly on the host operating system without containerization. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

With non-container builds, you can execute build commands that require direct access to host system resources or have requirements that make containerization challenging. This is particularly useful for scenarios such as building device drivers, running system-level tests, or working with tools that require host machine access.

The non-container feature is available in all Regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To learn more about non-container builds, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
Amazon S3 Tables are now available in three additional AWS Regions
Amazon S3 Tables are now available in three additional AWS Regions: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Sydney).

S3 Tables deliver the first cloud object store with built-in Apache Iceberg support and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, delivering up to 3x faster query performance through continual table optimization compared to unmanaged Iceberg tables, and up to 10x higher transactions per second compared to Iceberg tables stored in general purpose S3 buckets. You can use S3 Tables with AWS analytics services through the preview integration with Amazon SageMaker Lakehouse, as well as with Apache Iceberg-compatible open source engines like Apache Spark and Apache Flink. Additionally, S3 Tables perform continual table maintenance to automatically expire old snapshots and related data files, reducing storage cost over time.

S3 Tables are now generally available in eleven AWS Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.
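As an example of using an Iceberg-compatible engine, a Spark session can be pointed at a table bucket through the S3 Tables Iceberg catalog. The settings below follow the pattern shown in the S3 Tables documentation; the catalog name (`s3tablesbucket`), account ID, and table-bucket ARN are placeholders to replace with your own values, and the S3 Tables catalog JAR must be on the Spark classpath.

```properties
# Illustrative Spark configuration for querying S3 Tables via Apache Iceberg.
# "s3tablesbucket" is an arbitrary catalog name; the warehouse ARN below is a
# placeholder for your own table bucket.
spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog
spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:ap-southeast-2:111122223333:bucket/my-table-bucket
spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
```

With this in place, tables are addressed as `s3tablesbucket.<namespace>.<table>` in Spark SQL.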
YouTube
AWS Black Belt Online Seminar (Japanese)
AWS Blogs
AWS Japan Blog (Japanese)
- Amazon Connect Update Summary — February 2025
- Introducing AWS IoT Device Management Managed Integration (preview)
AWS News Blog
AWS Compute Blog
AWS Developer Tools Blog
AWS HPC Blog
- Enhancing Equity Strategy Backtesting with Synthetic Data: An Agent-Based Model Approach – part 2
- Enhancing Equity Strategy Backtesting with Synthetic Data: An Agent-Based Model Approach
AWS for Industries
AWS Machine Learning Blog
- Accelerate AWS Well-Architected reviews with Generative AI
- Dynamic metadata filtering for Amazon Bedrock Knowledge Bases with LangChain