6/27/2024, 12:00:00 AM ~ 6/28/2024, 12:00:00 AM (UTC)
Recent Announcements
AWS Backup support for Amazon S3 is now available in AWS Canada West (Calgary) Region
Today, we are announcing the availability of AWS Backup support for Amazon S3 in the Canada West (Calgary) Region. AWS Backup is a policy-based, fully managed, and cost-effective solution that enables you to centralize and automate data protection of Amazon S3 along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.
With this launch, AWS Backup support for Amazon S3 is available in all AWS commercial, AWS China, and AWS GovCloud (US) Regions where AWS Backup is available. For more information on regional availability and pricing, see the AWS Backup pricing page. To learn more about AWS Backup support for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
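As a rough sketch of the policy-based workflow, the following boto3 snippet assigns an S3 bucket to an existing backup plan so the plan's scheduled rules protect it; the plan ID, IAM role ARN, and bucket name are placeholders, not values from this announcement.

```python
import boto3

backup = boto3.client("backup", region_name="ca-west-1")  # Canada West (Calgary)

# Hypothetical identifiers -- substitute your own backup plan, role, and bucket.
PLAN_ID = "11111111-2222-3333-4444-555555555555"
ROLE_ARN = "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole"
BUCKET_ARN = "arn:aws:s3:::example-bucket"

# Assign the S3 bucket to the backup plan so its backup rules apply to it.
response = backup.create_backup_selection(
    BackupPlanId=PLAN_ID,
    BackupSelection={
        "SelectionName": "s3-bucket-selection",
        "IamRoleArn": ROLE_ARN,
        "Resources": [BUCKET_ARN],
    },
)
print(response["SelectionId"])
```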
Amazon QuickSight simplifies building pixel-perfect reports with Repeating Sections
Today, Amazon QuickSight announces the addition of Repeating Sections capability within Pixel-perfect reports. The new feature gives QuickSight Authors the ability to configure report sections to automatically repeat based on the values of one or more dimensions in their data.
When defining a repeating section, QuickSight users can select which dimension(s) the section should repeat for, such as state, country, or product category. The section will then dynamically generate a copy for each unique value in the selected dimension(s). For example, a section could repeat once for each state so that separate charts and text are generated specifically for California, Texas, New York, and other states. Repeating sections make it easy to automatically generate customized views of data across different groups or categories with minimal effort.
Amazon DataZone introduces API-driven, OpenLineage-compatible data lineage visualization in preview
Amazon DataZone introduces data lineage in preview, helping customers visualize lineage events from OpenLineage-enabled systems or through API and trace data movement from source to consumption. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls.
Amazon DataZone’s data lineage feature captures and visualizes the transformations of data assets and columns, providing a view into the data movement from source to consumption. Using Amazon DataZone’s OpenLineage-compatible API, domain administrators and data producers can capture and store lineage events beyond what is available in Amazon DataZone, including transformations in Amazon S3, AWS Glue, and other services. Data consumers in Amazon DataZone can gain confidence in an asset’s origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, Amazon DataZone versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset’s or job’s history. This historical lineage provides a deeper understanding of how data has evolved, essential for troubleshooting, auditing, and validating the integrity of data assets.
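For illustration, here is a minimal boto3 sketch of publishing an OpenLineage-style run event to a DataZone domain through the lineage API; the domain identifier, job, and dataset names are hypothetical, and the exact event schema should be taken from the OpenLineage spec and the Amazon DataZone documentation.

```python
import json
import uuid
from datetime import datetime, timezone

import boto3

datazone = boto3.client("datazone")

# Hypothetical OpenLineage RunEvent: a job reading one S3 dataset and writing
# another. Field names follow the OpenLineage specification.
run_event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "example-namespace", "name": "daily_orders_etl"},
    "inputs": [{"namespace": "s3://example-raw-bucket", "name": "orders"}],
    "outputs": [{"namespace": "s3://example-curated-bucket", "name": "orders_clean"}],
    "producer": "https://example.com/etl",
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json",
}

# Post the event to a DataZone domain (placeholder domain identifier).
response = datazone.post_lineage_event(
    domainIdentifier="dzd_exampledomainid",
    event=json.dumps(run_event).encode("utf-8"),
)
print(response["ResponseMetadata"]["HTTPStatusCode"])
```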
Amazon Managed Service for Apache Flink now supports Apache Flink 1.19
Amazon Managed Service for Apache Flink now supports Apache Flink 1.19. This version includes new capabilities in the SQL API such as state TTL configuration and session window support. Flink 1.19 also includes Python 3.11 support, trace reporters for job restarts and checkpointing, and more. You can use in-place version upgrades for Apache Flink to adopt the Apache Flink 1.19 runtime for a simpler and faster upgrade of your existing applications.
Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors. Create or update an Amazon Managed Service for Apache Flink application in the Amazon Managed Service for Apache Flink console.
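As a sketch of the in-place upgrade path, the snippet below calls the Kinesis Analytics v2 API (which backs Managed Service for Apache Flink) to move an existing application's runtime to Flink 1.19; the application name is a placeholder, and the parameter names should be verified against the UpdateApplication reference.

```python
import boto3

flink = boto3.client("kinesisanalyticsv2")

APP_NAME = "example-streaming-app"  # placeholder application name

# Look up the current version ID, which UpdateApplication requires for
# optimistic locking.
detail = flink.describe_application(ApplicationName=APP_NAME)
current_version = detail["ApplicationDetail"]["ApplicationVersionId"]

# Request an in-place runtime upgrade to Apache Flink 1.19.
flink.update_application(
    ApplicationName=APP_NAME,
    CurrentApplicationVersionId=current_version,
    RuntimeEnvironmentUpdate="FLINK-1_19",
)
```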
Amazon IVS Real-Time Streaming now supports up to 25,000 viewers
The Amazon IVS Real-Time Streaming capability subscriber limit can now be raised beyond the default of 10,000 per stage in an AWS Region. You can request an increase for up to 25,000 subscribers per stage. With this enhancement, you can now reach an audience that is more than double the previous size, all engaging in the same Real-Time Stream.
The increased limit for subscribers per stage is supported in all AWS Regions where Amazon IVS is available. You can request a quota increase by using the Service Quotas console. To learn more about Amazon IVS Real-Time Streaming quotas, please refer to the service documentation. Amazon IVS is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
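Quota increases can also be requested programmatically. A minimal sketch with the Service Quotas API follows; the exact quota name and code for subscribers per stage vary by account, so the snippet lists the IVS quotas and filters by name instead of hard-coding a quota code.

```python
import boto3

quotas = boto3.client("service-quotas")

# Find the Amazon IVS quota for subscribers (viewers) per stage. The quota
# name matched here is an assumption -- inspect the printed names to confirm.
target = None
paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="ivs"):
    for quota in page["Quotas"]:
        print(quota["QuotaName"], quota["QuotaCode"], quota["Value"])
        if "subscriber" in quota["QuotaName"].lower():
            target = quota

# Request the higher limit (up to 25,000) for the matching quota.
if target:
    quotas.request_service_quota_increase(
        ServiceCode="ivs",
        QuotaCode=target["QuotaCode"],
        DesiredValue=25000.0,
    )
```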
AWS Blu Insights accelerates migrations with new AI capabilities
We are excited to announce new capabilities for accelerating AWS Mainframe Modernization with machine learning and generative AI assistance. Using the latest generative AI models in Amazon Bedrock and AWS machine learning services like Amazon Translate, AWS Blu Insights makes it simple to automatically generate code and file descriptions, transform code from mainframe languages, and query projects using natural language.
Customers can now automatically generate summaries of source code files and snippets, making it much easier to understand legacy mainframe applications. If a codebase has comments in languages other than English, customers can view a translation of the comments into English with a click in the console. Blu Insights also makes it much faster to find information within files. Now, customers can filter data in projects using natural language that Blu Insights automatically converts to specific Blu Age queries. Using generative AI, Blu Insights also speeds up common tasks by classifying codebase files that don’t have an extension, converting source files written in languages like Rexx and C, and creating previews of mainframe BMS screens. Finally, new project management features driven by generative AI simplify project management by taking natural language text like “schedule a meeting” and automating the creation of scheduled events to save time and improve collaboration. Customers can now take advantage of automatically generated Activity Summaries and Activity Audits, which include the actions taken by AI in a Blu Age project for auditing and compliance purposes. To learn more, visit the AWS Mainframe Modernization service and documentation pages.
Amazon EKS introduces cluster creation flexibility for networking add-ons
Starting today, Amazon Elastic Kubernetes Service (EKS) provides the flexibility to create Kubernetes clusters without the default networking add-ons, enabling you to easily install open source or third-party alternative add-ons or self-manage the default networking add-ons using any Kubernetes lifecycle management tool.
Every EKS cluster automatically comes with default networking add-ons, including Amazon VPC CNI, CoreDNS, and kube-proxy, which provide critical functionality that enables pod and service operations for EKS clusters. EKS also allows you to bring open source or third-party add-ons and tools that manage their lifecycle. With today’s launch, you can skip the installation of the default networking add-ons when creating a cluster, making it easier to install alternative add-ons. This also simplifies self-managing the default networking add-ons using any lifecycle management tool such as Helm or Kustomize, without needing to first remove the add-ons’ Kubernetes manifests from the cluster.
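A minimal boto3 sketch of the new option follows, assuming the CreateCluster flag for skipping the default networking add-ons is exposed as shown; the role ARN and subnet IDs are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Create a cluster without the default VPC CNI, CoreDNS, and kube-proxy
# manifests, so an alternative CNI (or self-managed copies of the defaults)
# can be installed afterwards. Role ARN and subnet IDs are placeholders.
eks.create_cluster(
    name="example-cluster",
    version="1.30",
    roleArn="arn:aws:iam::123456789012:role/example-eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
    bootstrapSelfManagedAddons=False,  # skip installing the default networking add-ons
)
```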
Amazon ECR supports Open Container Initiative Image and Distribution specification version 1.1
Today, Amazon Elastic Container Registry (ECR) announced support for the Open Container Initiative (OCI) Image and Distribution specification version 1.1, which includes support for Reference Types, simplifying the storage, discovery, and retrieval of artifacts related to a container image. AWS container services customers can now easily store, discover, and retrieve artifacts such as image signatures and software bills of materials (SBOMs) as defined by OCI 1.1 for a variety of supply chain security use cases such as image signing and vulnerability auditing. Through ECR’s support for Reference Types, customers now have a simple user experience for distributing and managing artifacts related to these use cases, consistent with how they manage container images today.
OCI Reference Types support in ECR allows customers to distribute artifacts in their repositories alongside their respective images. Artifacts for a specific image are discovered through their reference relationship and can be pulled the same way images are pulled. In addition, ECR’s replication feature supports referrers, copying artifacts to destination Regions and accounts so they are ready to use alongside replicated images. ECR lifecycle policies also support referring artifacts by deleting references when a subject image is deleted as a result of a lifecycle policy rule expire action, making management of referring artifacts simple with no additional configuration. OCI 1.1 is now supported in ECR in all AWS commercial Regions and the AWS GovCloud (US) Regions. OCI 1.1 is also supported in the Amazon ECR Public registry. To learn more, please visit our documentation.
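To show how referring artifacts can be discovered, here is a hedged Python sketch that authenticates to a private ECR registry and queries the OCI 1.1 referrers endpoint for an image digest; the registry, repository, and digest values are placeholders.

```python
import base64

import boto3
import requests

REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"  # placeholder registry
REPOSITORY = "example-app"                                  # placeholder repository
DIGEST = "sha256:" + "0" * 64                               # placeholder image digest

# ECR registry credentials: the authorization token decodes to "user:password".
ecr = boto3.client("ecr", region_name="us-east-1")
token = ecr.get_authorization_token()["authorizationData"][0]["authorizationToken"]
user, password = base64.b64decode(token).decode().split(":", 1)

# OCI 1.1 referrers endpoint: lists artifacts (signatures, SBOMs, ...) whose
# subject is the given image digest.
url = f"https://{REGISTRY}/v2/{REPOSITORY}/referrers/{DIGEST}"
resp = requests.get(url, auth=(user, password))
resp.raise_for_status()
for manifest in resp.json().get("manifests", []):
    print(manifest.get("artifactType"), manifest["digest"])
```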
Updates and Expansion to the AWS Well-Architected Framework and Lens Catalog
AWS is excited to announce updates to the Well-Architected Framework and Lens Catalog. This update expands guidance on architectural best practices, empowering customers to build and maintain optimized, secure, and resilient workloads in the cloud.
The Framework updates provide more recommendations for AWS services, observability, generative AI, and operating models. We also refreshed the lists of resources and the overall Framework structure. This update reduces redundancies, enhances consistency, and empowers customers to more accurately identify and address risks. We also expanded the Lens Catalog in the Well-Architected Tool to include additional industry-specific best practices. The Lens Catalog now includes the new Financial Services Industry Lens and updates to the Mergers and Acquisitions Lens. Additionally, we made significant updates to the Change Enablement in the Cloud whitepaper. With these updates to lenses and guidance, customers can optimize, secure, and align their cloud architectures based on their unique requirements. By leveraging the updated Well-Architected Framework and Lens Catalog, customers can follow the most current and comprehensive architectural best practices to confidently design, deploy, and operate their workloads in the cloud. To learn more about the AWS Well-Architected Framework and Lens Catalog updates, visit the AWS Well-Architected Framework documentation and explore the updated lenses in the Well-Architected Tool.
Announcing Amazon WorkSpaces Pools, a new feature of Amazon WorkSpaces
Amazon Web Services (AWS) announces Amazon WorkSpaces Pools, a new feature of Amazon WorkSpaces that helps customers save costs by sharing a pool of virtual desktops across a group of users who get a fresh desktop every time they log in. This new feature provides customers the flexibility and choice to support a wide range of use cases, including training labs, contact centers, and other shared environments. Some user settings, like bookmarks, and files stored in a central storage repository such as Amazon S3 or Amazon FSx can be saved for improved personalization.
WorkSpaces Pools also simplifies management across a customer’s WorkSpaces environment by providing a single console and set of clients to manage the various desktop hardware configurations, storage, and applications for the user, including the ability to manage their existing Microsoft 365 Apps for enterprise. Customers use AWS Application Auto Scaling to automatically scale a pool of virtual desktops based on real-time usage metrics or predefined schedules. WorkSpaces Pools offers pay-as-you-go hourly pricing, providing significant savings.
With the launch of WorkSpaces Pools, customers now have the option to choose between WorkSpaces Personal and WorkSpaces Pools, or even opt for a blend of both, all managed from a single AWS Management Console. WorkSpaces Pools is available with the familiar WorkSpaces bundles, including Value, Standard, Performance, Power, and PowerPro. For Region availability details, see AWS Regions and Availability Zones for WorkSpaces Pools. Learn more here.
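As a rough sketch, and assuming the new Pools operations are exposed in boto3 roughly as shown, creating a small pool might look like the following; the bundle and directory IDs are placeholders, and the field names should be checked against the Amazon WorkSpaces API reference.

```python
import boto3

workspaces = boto3.client("workspaces")

# Create a small pool of non-persistent desktops. Bundle and directory IDs are
# placeholders; Capacity sets how many concurrent user sessions to keep ready.
workspaces.create_workspaces_pool(
    PoolName="training-lab-pool",
    Description="Shared desktops for the training lab",
    BundleId="wsb-exampleid01",
    DirectoryId="d-exampledir01",
    Capacity={"DesiredUserSessions": 10},
)
```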
PostgreSQL 17 Beta 2 is now available in Amazon RDS Database Preview Environment
Amazon RDS for PostgreSQL 17 Beta 2 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 17 Beta 2 in the Amazon RDS Database Preview Environment with the benefits of a fully managed database.
PostgreSQL 17 includes updates to vacuuming that reduce memory usage, improve the time to finish vacuuming, and show the progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for JSON_TABLE features that can convert JSON to a standard PostgreSQL table. The MERGE command now supports the RETURNING clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details. Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the Preview Environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
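To make the JSON_TABLE and MERGE ... RETURNING additions concrete, here is an illustrative Python sketch (using psycopg2) run against a preview instance; the endpoint, credentials, and the items table are placeholders, and the exact syntax should be checked against the PostgreSQL 17 release notes.

```python
import psycopg2

# Placeholder connection details for a preview-environment instance.
conn = psycopg2.connect(
    host="example-pg17.preview.rds.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="example-password",
)

with conn, conn.cursor() as cur:
    # JSON_TABLE (new in PostgreSQL 17): project a JSON document into rows.
    cur.execute("""
        SELECT jt.*
        FROM JSON_TABLE(
            '[{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]'::jsonb,
            '$[*]' COLUMNS (id INT PATH '$.id', name TEXT PATH '$.name')
        ) AS jt;
    """)
    print(cur.fetchall())

    # MERGE ... RETURNING (new in 17): upsert and inspect the affected rows.
    # Assumes a placeholder table: CREATE TABLE items (id int PRIMARY KEY, name text);
    cur.execute("""
        MERGE INTO items AS t
        USING (VALUES (1, 'alpha')) AS s(id, name)
        ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET name = s.name
        WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)
        RETURNING merge_action(), t.id, t.name;
    """)
    print(cur.fetchall())

conn.close()
```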
Amazon RDS Multi-AZ deployment with two readable standbys now supports snapshot export to S3
Amazon Relational Database Service (Amazon RDS) Multi-AZ deployments with two readable standbys now support export of snapshot data to an Amazon S3 bucket. An Amazon RDS Multi-AZ deployment with two readable standbys is ideal when your workloads require lower write latency and more read capacity. In addition, this deployment option supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or open source tools such as the AWS Advanced JDBC Driver, PgBouncer, or ProxySQL.
You can now export snapshot data from Amazon RDS Multi-AZ deployments with two readable standbys to an Amazon S3 bucket. The export process runs in the background and doesn’t affect the performance of your cluster. When you export a DB snapshot, Amazon RDS extracts data from the snapshot and stores it in an Amazon S3 bucket. The data is stored in an Apache Parquet format that is compressed and consistent. After the data is exported, you can analyze it directly through tools like Amazon Athena or Amazon Redshift Spectrum. See the Amazon RDS User Guide for a full list of supported Regions and engine versions. Learn more about Amazon RDS Multi-AZ deployments in the AWS News Blog. Create or update fully managed Amazon RDS Multi-AZ databases with two readable standby instances in the Amazon RDS Management Console.
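A brief boto3 sketch of starting a snapshot export follows; the snapshot ARN, bucket, IAM role, and KMS key are placeholders for resources you would already have in place.

```python
import boto3

rds = boto3.client("rds")

# Export a Multi-AZ DB cluster snapshot to S3 as Parquet. All identifiers
# (snapshot ARN, bucket, role, KMS key) are placeholders.
rds.start_export_task(
    ExportTaskIdentifier="orders-cluster-snapshot-export",
    SourceArn="arn:aws:rds:us-east-1:123456789012:cluster-snapshot:example-snap",
    S3BucketName="example-snapshot-exports",
    IamRoleArn="arn:aws:iam::123456789012:role/example-rds-export-role",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11112222-3333-4444-5555-666677778888",
)
```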
AWS Blogs
AWS Japan Blog (Japanese)
- AWS Summit Japan 2024 | Automotive Industry AWS Booth Edition Recap
- AWS Weekly Roundup: Passkey MFA, Malware Protection on Amazon S3, and More (June 17, 2024)
- Scaling Relational Databases for SaaS (Part 2: Sharding and Routing)
- Scaling Relational Databases for SaaS (Part 1: Common Scaling Patterns)
- In preparation — Taiwan AWS Region
- AWS Audit Manager Extends Generative AI Best Practices Framework to Amazon SageMaker
AWS Japan Startup Blog (Japanese)
AWS News Blog
- Amazon WorkSpaces Pools: Cost-effective, non-persistent virtual desktops
- Introducing end-to-end data lineage (preview) visualization in Amazon DataZone
- Amazon CodeCatalyst now supports GitLab and Bitbucket repositories, with blueprints and Amazon Q feature development
AWS Architecture Blog
AWS Cloud Operations & Migrations Blog
AWS Big Data Blog
- Implement disaster recovery with Amazon Redshift
- Build a real-time streaming generative AI application using Amazon Bedrock, Amazon Managed Service for Apache Flink, and Amazon Kinesis Data Streams
AWS Database Blog
AWS for Industries
- Real-time Analytics on Patient Bedside Medical Devices
- Rapidly experimenting with Catena-X data space technology on AWS
AWS Machine Learning Blog
- The future of productivity agents with NinjaTech AI and AWS Trainium
- Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation
- Build a conversational chatbot using different LLMs within single interface – Part 1
AWS for M&E Blog
AWS Security Blog
- ACM will no longer cross sign certificates with Starfield Class 2 starting August 2024
- Access AWS services programmatically using trusted identity propagation