7/9/2024, 12:00:00 AM ~ 7/10/2024, 12:00:00 AM (UTC)

Recent Announcements

Announcing the next generation of Amazon FSx for NetApp ONTAP file systems

Today, we're announcing next-generation Amazon FSx for NetApp ONTAP file systems that provide higher scalability and enhanced flexibility compared to previous-generation file systems. Previous-generation file systems consisted of a single highly-available (HA) pair of file servers with up to 4 GB/s of throughput. Next-generation file systems can be created or expanded with up to 12 HA pairs, allowing you to scale up to 72 GB/s of total throughput (up to 6 GB/s per pair), giving you the flexibility to scale performance and storage to meet the needs of your most demanding workloads.

With next-generation FSx for ONTAP file systems, a single HA pair can now deliver up to 6 GB/s of throughput, giving workloads running on a single HA pair even more room to grow. However, customers with the most compute-intensive workloads need the higher throughput provided by a file system with multiple HA pairs. Before today, these customers could create a file system with multiple HA pairs but couldn't add HA pairs or adjust its throughput at a later time. Now, next-generation file systems allow you to add HA pairs and adjust their throughput capacity, giving you additional flexibility to optimize your workload's performance over time. Next-generation file systems are available today in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney). You can create next-generation Multi-AZ file systems with a single HA pair, and Single-AZ file systems with up to 12 HA pairs. To learn more, visit the FSx for ONTAP user guide.
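
As a rough illustration, here is a minimal boto3 sketch of creating a scale-out Single-AZ file system with multiple HA pairs. The storage capacity, subnet ID, and throughput values are placeholders, and the parameter names should be verified against the current CreateFileSystem API reference.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: create a Single-AZ scale-out file system with 4 HA pairs.
# Capacity, subnet, and throughput values below are illustrative placeholders.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=4096,                        # GiB, spread across the HA pairs
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_2",         # scale-out Single-AZ deployment
        "HAPairs": 4,                            # up to 12 HA pairs
        "ThroughputCapacityPerHAPair": 6144,     # MBps per HA pair (6 GB/s)
    },
)
print(response["FileSystem"]["FileSystemId"])
```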

Amazon FSx for NetApp ONTAP now supports NVMe-over-TCP for simpler, lower-latency shared block storage

Amazon FSx for NetApp ONTAP, a service that provides fully managed shared storage built on NetApp's popular ONTAP file system, today announced support for the NVMe-over-TCP (NVMe/TCP) block storage protocol. Using NVMe/TCP, you can accelerate your block storage workloads such as databases and Virtual Desktop Infrastructure (VDI) with lower latency compared to traditional iSCSI block storage, and simplify multi-path I/O (MPIO) configuration relative to iSCSI.

FSx for ONTAP provides you with multi-protocol access to fully managed shared storage, including the iSCSI protocol for deploying applications such as databases and VDI that rely on shared block storage. NVMe/TCP is an implementation of the NVMe protocol that transports data over TCP using traditional Ethernet as a fabric. With this launch, you have the option of using NVMe/TCP to provide shared block storage for these applications in order to take advantage of NVMe/TCP's lower latency and simplified setup. NVMe/TCP is available on all second-generation Amazon FSx for ONTAP file systems in all AWS Regions where they're available. To learn more, visit the FSx for ONTAP user guide.

AWS Glue Data Catalog now supports generating statistics for Apache Iceberg tables

AWS Glue Data Catalog now supports generating column-level aggregated statistics for Apache Iceberg tables. These statistics are integrated with the cost-based optimizer (CBO) of Amazon Redshift Spectrum, resulting in improved query performance and potential cost savings.

Apache Iceberg supports statistics such as null counts, minimum, and maximum values, but it lacks support for generating aggregation statistics such as the number of distinct values (NDV). With this launch, you now have an integrated end-to-end experience where NDVs are collected on the columns of an Apache Iceberg table and stored in Apache Iceberg Puffin files. Amazon Redshift uses these aggregation statistics to optimize queries by applying the most restrictive filters as early as possible in query processing, thereby limiting memory usage and the number of records read to produce the query results. To get started, you can generate statistics for an Apache Iceberg table using the AWS Glue console or AWS Glue APIs. With each run, the Glue Data Catalog computes statistics for the current Iceberg table snapshot and stores them in an Iceberg Puffin file and the Glue Data Catalog. When you run queries from Amazon Redshift Spectrum, you automatically get the query performance improvements through the built-in integration with Apache Iceberg.
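
As a minimal sketch, the boto3 call below starts a statistics generation run for an Iceberg table; the database, table, and IAM role names are placeholders for your own resources.

```python
import boto3

glue = boto3.client("glue")

# Sketch: kick off a column statistics generation run for an Iceberg table.
# DatabaseName, TableName, and Role are placeholders.
run = glue.start_column_statistics_task_run(
    DatabaseName="analytics_db",
    TableName="orders_iceberg",
    Role="arn:aws:iam::123456789012:role/GlueStatsRole",
)
print(run["ColumnStatisticsTaskRunId"])
```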

Amazon FSx for NetApp ONTAP now allows you to read data during backup restores

Amazon FSx for NetApp ONTAP, a fully managed shared storage service built on NetApp's popular ONTAP file system, now allows you to read data from a volume while it is being restored from a backup. The feature, read-access during backup restores, allows you to improve Recovery Time Objectives by up to 17x for read-only workloads that rely on backup restores for business continuity, such as media streaming and compliance verification.

You can restore an FSx for ONTAP backup into a new volume at any time. Before today, when you restored a backup, Amazon FSx provided read-write access to data only once the backup was fully downloaded onto the volume. The restore process typically took minutes to hours, depending on the backup size. Starting today, Amazon FSx enables read access to data within minutes of initiating a restore, enabling you to browse through your backup and retrieve critical data to resume operations faster in the event of accidental data modification or deletion. The volume becomes writable automatically once data has been fully restored. Now, you can reduce the time to recover media streaming applications when the primary volume becomes unavailable by serving reads from a volume being restored, and compliance teams can initiate audits sooner by accessing data without waiting for the restore to complete. This feature is available on all new and existing FSx for ONTAP second-generation file systems in all AWS Regions where FSx for ONTAP second-generation file systems are available. See the FSx for ONTAP product documentation for more details.
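
For context, a restore of this kind is initiated by creating a new volume from a backup; a minimal boto3 sketch is below, with the backup ID and storage virtual machine ID as placeholders. With this launch, the new volume becomes readable within minutes of this call and writable once the restore completes.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch: restore a backup into a new volume. IDs below are placeholders.
volume = fsx.create_volume_from_backup(
    BackupId="backup-0123456789abcdef0",
    Name="restored_vol",
    OntapConfiguration={
        "StorageVirtualMachineId": "svm-0123456789abcdef0",
        "JunctionPath": "/restored_vol",
    },
)
print(volume["Volume"]["VolumeId"])
```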

Amazon SageMaker introduces a new generative AI inference optimization capability

Today, Amazon SageMaker announced general availability of a new inference capability that delivers up to ~2x higher throughput while reducing costs by up to ~50% for generative AI models such as Llama 3, Mistral, and Mixtral models. For example, with a Llama 3-70B model, you can achieve up to ~2400 tokens/sec on an ml.p5.48xlarge instance versus ~1200 tokens/sec previously without any optimization.

With this new capability, customers can choose from a menu of the latest model optimization techniques, such as speculative decoding, quantization, and compilation, and apply them to their generative AI models. SageMaker does the heavy lifting of provisioning the required hardware to run the optimization recipe, along with the necessary deep learning frameworks and libraries. Customers get out-of-the-box support for a speculative decoding solution from SageMaker that has been tested for performance at scale on various popular open source models, or they can bring their own speculative decoding solution. For quantization, SageMaker ensures compatibility and support for precision types across different model architectures. For compilation, SageMaker's runtime infrastructure ensures efficient loading and caching of optimized models to reduce auto-scaling time. Customers can leverage this new capability from the AWS SDK for Python (Boto3), the SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). This capability is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo) Regions.
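
As a very rough boto3 sketch, an optimization job applying a quantization recipe might look like the following. The model location, role, instance type, and environment settings are placeholders, and the exact request fields should be checked against the CreateOptimizationJob API reference before use.

```python
import boto3

sm = boto3.client("sagemaker")

# Rough sketch only; field names assumed from the CreateOptimizationJob API,
# and all ARNs, S3 paths, and environment values are placeholders.
sm.create_optimization_job(
    OptimizationJobName="llama3-70b-quantized",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    ModelSource={"S3": {"S3Uri": "s3://my-bucket/llama3-70b/"}},
    DeploymentInstanceType="ml.p5.48xlarge",
    OptimizationConfigs=[
        {"ModelQuantizationConfig": {"OverrideEnvironment": {"OPTION_QUANTIZE": "awq"}}}
    ],
    OutputConfig={"S3OutputLocation": "s3://my-bucket/optimized/"},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```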

AWS Glue Studio now offers a no-code data preparation authoring experience

Today, AWS Glue Studio Visual ETL announces general availability of data preparation authoring, a new no-code data preparation user experience for business users and data analysts with a spreadsheet-style UI that runs data integration jobs at scale on AWS Glue for Spark. The new visual data preparation experience makes it easier for data analysts and data scientists to clean and transform data to prepare it for analytics and machine learning (ML). Within this new experience, you can choose from hundreds of prebuilt transformations to automate data preparation tasks, all without the need to write any code.

Business analysts can now collaborate with data engineers to build data integration jobs. Data engineers can use the Glue Studio Visual flow-based view to define connections to the data and set the ordering of the data flow process, while business analysts can use the data preparation experience to define the data transformation and output. Additionally, DataBrew customers can import their existing data cleansing and preparation "recipes" into the new AWS Glue data preparation experience, continue to author them directly in AWS Glue Studio, and scale recipes up to process petabytes of data at the lower price point of AWS Glue jobs. The feature is available in all commercial AWS Regions where AWS Glue DataBrew is available. To learn more, refer to the documentation and read the blog post.

Amazon RDS Data API for Aurora PostgreSQL is now available in 10 additional AWS Regions

The RDS Data API for Aurora Serverless v2 and Aurora provisioned PostgreSQL-Compatible database instances is now available in Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Europe (Ireland), Europe (London), Europe (Paris), US West (N. California), US East (Ohio), and Canada (Central). The RDS Data API allows you to access these Aurora clusters via a secure HTTP endpoint and run SQL statements without the use of database drivers and without managing connections.

The Data API eliminates the use of drivers and improves application scalability by automatically pooling and sharing database connections (connection pooling) rather than requiring customers to manage connections. Customers can call the Data API via the AWS SDKs and CLI. The Data API also enables access to Aurora databases via AWS AppSync GraphQL APIs. API commands supported in the Data API for Aurora Serverless v2 and Aurora provisioned clusters are backwards compatible with the Data API for Aurora Serverless v1 for easy customer application migrations. The Data API supports Aurora PostgreSQL 15.3, 14.8, 13.11, and higher versions. Customers currently using the Data API for Aurora Serverless v1 are encouraged to migrate to Aurora Serverless v2 to take advantage of the new Data API. To learn more, read the documentation.
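
A minimal boto3 sketch of calling the Data API against an Aurora PostgreSQL cluster is shown below; the cluster ARN, secret ARN, database name, and table are placeholders.

```python
import boto3

rds_data = boto3.client("rds-data")

# Sketch: run a SQL statement over the HTTP-based Data API, with no database
# driver or connection management required. ARNs and names are placeholders.
result = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-2:123456789012:cluster:my-aurora-pg",
    secretArn="arn:aws:secretsmanager:us-east-2:123456789012:secret:my-db-secret",
    database="postgres",
    sql="SELECT id, status FROM orders WHERE status = :status",
    parameters=[{"name": "status", "value": {"stringValue": "shipped"}}],
)
print(result["records"])
```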

Amazon MWAA now supports Apache Airflow version 2.9

You can now create Apache Airflow version 2.9 environments on Amazon Managed Workflows for Apache Airflow (MWAA). Apache Airflow 2.9 is the latest minor release of the popular open-source tool that helps customers author, schedule, and monitor workflows.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. Apache Airflow 2.9 introduces several notable enhancements, such as new API endpoints for improved dataset management, custom names in dynamic task mapping for better readability, and advanced scheduling options including conditional expressions for dataset dependencies and the combination of dataset and time-based schedules.
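
To illustrate the conditional dataset expressions introduced in Airflow 2.9, here is a small sketch of a DAG that runs when either of two datasets is updated; the dataset URIs and task body are placeholders.

```python
from datetime import datetime

from airflow.datasets import Dataset
from airflow.decorators import dag, task

orders = Dataset("s3://example-bucket/orders")      # placeholder dataset URIs
payments = Dataset("s3://example-bucket/payments")

# Airflow 2.9 conditional expression: trigger when EITHER dataset is updated.
@dag(start_date=datetime(2024, 7, 1), schedule=(orders | payments), catchup=False)
def reconcile():
    @task
    def build_report():
        print("reconciling orders and payments")

    build_report()

reconcile()
```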

Amazon EC2 R8g instances powered by AWS Graviton4 now generally available

AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) R8g instances. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5 TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on the 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge.
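
For reference, launching an R8g instance with boto3 looks like the sketch below; the AMI ID is a placeholder and must refer to an arm64 image in your Region.

```python
import boto3

ec2 = boto3.client("ec2")

# Sketch: launch a memory-optimized Graviton4 (arm64) instance.
# The AMI ID below is a placeholder.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r8g.4xlarge",
    MinCount=1,
    MaxCount=1,
)
```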

Amazon OpenSearch Service announces Natural Language Query Generation for log analysis

Amazon OpenSearch Service has added support for AI-powered Natural Language Query Generation in the OpenSearch Dashboards Log Explorer. With Natural Language Query Generation, you can accelerate analysis by asking log exploration questions in plain English, which are then automatically translated to the relevant Piped Processing Language (PPL) queries and executed to fetch the requested data.

With this new natural language support, you can get started quickly with log analysis without first having to be proficient in PPL. Further, it opens up log analysis to a wider set of team members who can simply explore their log data by asking questions like "show me the count of 5xx errors for each of the pages on my website" or "show me the throughput by hosts". This also helps advanced users construct complex queries by allowing iterative refinement of both the natural language questions and the generated PPL. This feature is available at no cost for customers running managed clusters with OpenSearch 2.13 or above in US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), AWS GovCloud (US-East), and AWS GovCloud (US-West).
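
To give a feel for the output, a question like "show me the count of 5xx errors for each of the pages on my website" might translate to PPL roughly like the query in the sketch below, which runs it against the domain's PPL endpoint with the Python requests library. The endpoint, credentials, index, and field names are placeholders.

```python
import requests

# Placeholders: domain endpoint, credentials, index name, and field names.
ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"

# Roughly the PPL that the natural language question above could generate.
ppl = "source=web_logs | where status >= 500 | stats count() by page"

resp = requests.post(
    f"{ENDPOINT}/_plugins/_ppl",
    json={"query": ppl},
    auth=("admin", "example-password"),  # basic auth for fine-grained access control
    timeout=30,
)
print(resp.json())
```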

AWS Partner Central now supports multi-factor authentication

AWS Partner Central now supports multi-factor authentication (MFA) capabilities at login. Users will be prompted to enter a one-time passcode sent to their registered e-mail address along with their login credentials to confirm their identity.

MFA adds an additional layer of protection, reducing the risk of unauthorized access to AWS Partner Central. Additionally, it ensures only active users are able to access AWS Partner Central, as the registered email address must be accessible. AWS Partners will be automatically enrolled in MFA, but alliance leads and cloud admins have the ability to disable the feature for all AWS Partner Central users if desired. To learn more, visit the AWS Partner Central Getting Started Guide.

Simplified service terms for AWS Marketplace sellers

AWS Partners can now register as sellers in AWS Marketplace with a simplified one-click experience. We have removed the need for AWS Partners to review and accept a separate set of terms to sell in AWS Marketplace by incorporating the AWS Marketplace terms into the AWS service terms. Instead, partners simply sign in to their AWS account and click to register as an AWS Marketplace seller in the AWS Marketplace Management Portal.

AWS Partners such as independent software vendors (ISVs), data providers, and consulting partners can sell their software, services, and data in AWS Marketplace to AWS customers. AWS Marketplace, jointly with the AWS Partner Network (APN), helps ISVs and consulting partners build, market, and sell their AWS offerings by providing valuable business, technical, and marketing support. AWS Marketplace is available to customers globally. Partners can discover the benefits of becoming an AWS Marketplace seller and get started on their AWS Marketplace journey. To learn more, review the new simplified terms for selling in AWS Marketplace.

Amazon EventBridge Schema Registry now supports AWS PrivateLink VPC endpoints

Amazon EventBridge Schema Registry now supports AWS PrivateLink, allowing you to access the registry from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. With today's launch, you can leverage EventBridge Schema Registry features from a private subnet without the need to deploy an internet gateway, configure firewall rules, or set up proxy servers.

Amazon EventBridge lets you use events to connect application components, making it easier to build scalable event-driven applications. EventBridge Schema Registry allows you to centrally store schemas, representing the structure of your events, so other teams can discover and consume them. You can add schemas to the registry yourself or use the Schema Discovery feature to capture the schemas of events sent to an EventBridge event bus. Once schemas are in your registry, you can download code bindings for those schemas in Java, Python, TypeScript, and Golang and use them in your preferred Integrated Development Environment (IDE) to take advantage of IDE features such as code validation and auto-completion.
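
A minimal boto3 sketch of creating the interface VPC endpoint is below. The VPC, subnet, and security group IDs are placeholders, and the service name is assumed to be the "schemas" endpoint service for your Region; confirm it in the VPC console or documentation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch: interface endpoint so Schema Registry calls stay inside the VPC.
# Service name is an assumption; resource IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.schemas",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```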

Amazon FSx for OpenZFS introduces a highly available Single-AZ deployment option

Amazon FSx for OpenZFS now supports highly available (HA) Single-AZ deployments, offering high availability and consistent sub-millisecond latencies for use cases like data analytics, machine learning, and semiconductor chip design that can benefit from high availability but do not require multi-zone resiliency. Single-AZ HA file systems provide a lower-latency and lower-cost storage option than Multi-AZ file systems for these use cases, while offering all the same data management capabilities and features.

Before today, FSx for OpenZFS offered Single-AZ non-HA file systems, which provide sub-millisecond read and write latencies, and Multi-AZ file systems, which provide high availability and durability by replicating data synchronously across AZs. With Single-AZ HA file systems, customers can now achieve both high availability and consistent sub-millisecond latencies at a lower cost relative to Multi-AZ file systems for workloads such as data analytics, machine learning, and semiconductor chip design that do not need multi-zone resiliency because they're operating on a secondary copy of the data or data that can be regenerated. You can create Single-AZ HA file systems in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Stockholm), and Middle East (Bahrain). To learn more about Single-AZ HA file systems, please visit the FSx for OpenZFS documentation.
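
As a hedged boto3 sketch, creating an HA Single-AZ file system might look like the following; the deployment type string, capacity, throughput, and subnet ID are assumptions and placeholders, so check the CreateFileSystem API reference for the exact values.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch only: deployment type string is an assumption; sizing values and
# the subnet ID are placeholders.
fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=2048,                       # GiB, placeholder
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder
    OpenZFSConfiguration={
        "DeploymentType": "SINGLE_AZ_HA_2",     # assumed HA Single-AZ type
        "ThroughputCapacity": 640,              # MBps, placeholder
    },
)
```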

Amazon Q Business now provides responses that are personalized to users

Amazon Q Business now offers personalization capabilities that help customers further increase employee productivity by taking each user's profile into account to provide more useful responses. Q Business uses information such as an employee's location, department, and role to improve the relevance of responses.

Q Business' personalization capabilities are automatically enabled and will use your enterprise's employee profile data to improve each user's experience, with no additional setup needed. Q Business receives employee profile information from your organization's identity provider that you have connected to AWS IAM Identity Center. Amazon Q Business revolutionizes the way that employees interact with organizational knowledge and enterprise systems. It helps users get comprehensive answers to complex questions and take actions in a unified, intuitive web-based chat experience, all using an enterprise's existing content, data, and systems. The personalization capability is available in all AWS Regions where Q Business is available. For more information, see the Amazon Q Business User Guide.

Announcing Playlist page for PartyRock

Today, PartyRock is announcing a Playlist page to help you showcase a collection of PartyRock apps curated by you. Everyone can build, use, and share generative AI powered apps using PartyRock, which uses foundation models from Amazon Bedrock.

On November 26th, 2023, we announced a Discover page to showcase top community-created PartyRock apps. With this release, you can now add apps to a personalized Playlist page, making it convenient for others to view and use your apps. Previously, PartyRock apps were available in two modes: Private, where only you could view, use, and edit your apps, and Shared using links, where you could share links with anyone to view and use your apps. Starting today, you have an additional mode of making your apps Public, where they are automatically displayed on your Playlist page, making it easy for anyone to view and use your apps. Set up your playlist by navigating to the Playlist page from the side navigation bar on PartyRock. There, you can review your current apps and add them to your playlist. Once created, your playlist will be available at https://partyrock.aws/u/. With playlists also come "app views," which display the number of times other users viewed or used your apps, whether via a shared link or directly from your Playlist page.

For a limited time, AWS offers new PartyRock users a free trial without the need to provide a credit card or sign up for an AWS account. To get hands-on with generative AI, visit PartyRock.

Amazon S3 Express One Zone now supports logging of all events in AWS CloudTrail

With Amazon S3 Express One Zone support for logging of all data plane API actions in AWS CloudTrail, you can get details on who made API calls to S3 Express One Zone and when those calls were made, enhancing data visibility for governance, compliance, and operational auditing. Now, you can use AWS CloudTrail to log S3 Express One Zone object-level activity such as PutObject and GetObject, in addition to directory-bucket-level actions such as CreateBucket and DeleteBucket that were already supported.

With logging of all events in AWS CloudTrail, you can quickly determine which S3 Express One Zone objects were created, read, updated, or deleted and identify the source of the API calls. If you detect unauthorized S3 Express One Zone object access, you can take immediate action to restrict access. In addition, you can use CloudTrail features such as advanced event selectors for granular control over which events are logged, and CloudTrail integration with Amazon EventBridge to create rule-based workflows for event-driven architectures. You can enable AWS CloudTrail data event logging for S3 Express One Zone in all AWS Regions where S3 Express One Zone is available. Get started with CloudTrail event logging for S3 Express One Zone by using the CloudTrail console, AWS CLI, or AWS SDKs. For pricing information, visit the CloudTrail pricing page. To learn more, see the S3 User Guide, the S3 Express One Zone product page, and the AWS News Blog.
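
A minimal boto3 sketch of enabling these data events on an existing trail with advanced event selectors is shown below; the trail name is a placeholder, and the resources.type value is taken to be the S3 Express object type.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Sketch: log all S3 Express One Zone object-level (data plane) events on an
# existing trail. Trail name is a placeholder; resources.type is assumed to
# be "AWS::S3Express::Object".
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    AdvancedEventSelectors=[
        {
            "Name": "S3ExpressObjectEvents",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3Express::Object"]},
            ],
        }
    ],
)
```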

Amazon OpenSearch Serverless expands support for time-series workloads up to 30TB

We are excited to announce that Amazon OpenSearch Serverless now supports workloads of up to 30TB of data for time-series collections. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple for you to run search and analytics workloads without having to think about infrastructure management. With support for larger datasets, OpenSearch Serverless now enables more data-intensive use cases such as log analytics, security analytics, real-time application monitoring, and more.

OpenSearch Serverless' compute capacity used for indexing and search is measured in OpenSearch Compute Units (OCUs). To accommodate larger datasets, OpenSearch Serverless now allows customers to independently scale indexing and search operations to use up to 500 OCUs. In addition, this release brings a new data hydration mechanism that improves scaling and lowers query latency. You can configure the maximum OCU limits on search and indexing independently to manage costs. You can also monitor real-time OCU usage with CloudWatch metrics to gain a better perspective on your workload's resource consumption.
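
As a hedged boto3 sketch, raising the account-level OCU ceilings for indexing and search independently might look like the following; the limit values are placeholders and the field names assume the UpdateAccountSettings API.

```python
import boto3

aoss = boto3.client("opensearchserverless")

# Sketch: raise indexing and search OCU ceilings independently for a larger
# time-series workload. Values are placeholders; field names are assumptions.
aoss.update_account_settings(
    capacityLimits={
        "maxIndexingCapacityInOCU": 200,
        "maxSearchCapacityInOCU": 200,
    }
)
```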

Announcing Valkey GLIDE, an open source client library for Valkey and Redis open source

Today, we're introducing Valkey General Language Independent Driver for the Enterprise (GLIDE), an open source Valkey client library. Valkey is an open source key-value data store that supports a variety of workloads such as caching and message queues. Valkey GLIDE is one of the official client libraries for Valkey, and it supports all Valkey commands. GLIDE supports Valkey 7.2 and above, and Redis open source 6.2, 7.0, and 7.2. Application programmers can use GLIDE to safely and reliably connect their applications to services that are Valkey- and Redis OSS-compatible.

Valkey GLIDE is designed for reliability, optimized performance, and high availability for Valkey- and Redis OSS-based applications. It is supported by AWS and is preconfigured with best practices learned from over a decade of operating Redis OSS-compatible services used by thousands of customers. To help ensure consistency in application development and operations, GLIDE is implemented using a core driver framework, written in Rust, with language-specific extensions. This design ensures consistency in features across languages and reduces overall complexity. In this release, GLIDE is available for Java and Python, with support for additional languages actively under development. Valkey GLIDE is open source, permissively licensed (Apache 2.0 license), and can be used with any Valkey- or Redis OSS-compatible distribution supporting versions 6.2, 7.0, and 7.2, including Amazon ElastiCache and Amazon MemoryDB. You can get started by downloading it from the major open source package managers. Learn more about it in the blog post, and submit contributions on the Valkey GLIDE GitHub repository.
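
For a feel of the Python client, here is a small asyncio sketch. The class names (GlideClientConfiguration, NodeAddress, GlideClient) are assumptions based on the GLIDE Python package and should be checked against the GLIDE documentation; the endpoint is a placeholder.

```python
import asyncio

# Class names below are assumptions; verify against the Valkey GLIDE docs.
from glide import GlideClient, GlideClientConfiguration, NodeAddress


async def main():
    # Placeholder endpoint for a Valkey- or Redis OSS-compatible service.
    config = GlideClientConfiguration([NodeAddress("my-cache.example.com", 6379)])
    client = await GlideClient.create(config)
    await client.set("greeting", "hello")
    print(await client.get("greeting"))
    await client.close()


asyncio.run(main())
```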

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Cloud Operations & Migrations Blog

AWS Big Data Blog

AWS Contact Center

AWS Database Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

AWS Security Blog

AWS Storage Blog

Open Source Project

AWS CLI

Amplify for JavaScript

Firecracker