6/26/2024, 12:00:00 AM ~ 6/27/2024, 12:00:00 AM (UTC)
Recent Announcements
Amazon Managed Service for Apache Flink introduces new APIs for visibility into application operations
Amazon Managed Service for Apache Flink introduces the ListApplicationOperations and DescribeApplicationOperation APIs for visibility into operations that were performed on your application. These APIs provide details about when an operation was initiated, its current status, whether it succeeded or failed, whether it triggered a rollback, and more, so that you can take follow-up action.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.
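A sketch of how the new operation-visibility APIs could be used for triage. The response shape below is an assumption modeled on the fields the announcement describes (operation identifier, status, rollback linkage), not the official API reference:

```python
# Sketch: flag application operations that need follow-up action.
# The response field names are assumptions based on the announcement's
# description; verify them against the ListApplicationOperations API reference.

def operations_needing_followup(list_response):
    """Return IDs of operations that failed or triggered a rollback."""
    flagged = []
    for op in list_response.get("ApplicationOperationInfoList", []):
        if op.get("OperationStatus") == "FAILED" or op.get("TriggeredRollback"):
            flagged.append(op["OperationId"])
    return flagged

# Example payload mimicking what ListApplicationOperations might return.
sample = {
    "ApplicationOperationInfoList": [
        {"OperationId": "op-1", "Operation": "UpdateApplication",
         "OperationStatus": "SUCCESSFUL", "TriggeredRollback": False},
        {"OperationId": "op-2", "Operation": "UpdateApplication",
         "OperationStatus": "FAILED", "TriggeredRollback": True},
    ]
}

print(operations_needing_followup(sample))  # ['op-2']
```

From there, DescribeApplicationOperation can be called with each flagged identifier to get the full details for that operation.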
Amazon Managed Service for Apache Flink now supports system-rollback
Amazon Managed Service for Apache Flink introduces the system-rollback feature to automatically revert your application to the previous running application version during Flink job submission if there are code or configuration errors. You can now opt in to this feature for improved application uptime. You may encounter errors such as insufficient permissions or incompatible savepoints when you perform application updates, Flink version upgrades, or scaling actions. System-rollback identifies these errors during job submission and prevents a bad update to your application. This gives you higher confidence in rolling out changes to your application faster.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.
Amazon Route 53 Application Recovery Controller zonal autoshift is now available in the AWS GovCloud (US) Regions
Amazon Route 53 Application Recovery Controller (Route 53 ARC) zonal autoshift is now generally available in the AWS GovCloud (US-East and US-West) Regions. AWS customers and AWS Partners who operate in the AWS GovCloud (US) Regions can now use zonal autoshift, a feature you can enable to safely and automatically shift an application’s traffic away from an Availability Zone (AZ) when AWS identifies a potential failure affecting that AZ. For failures such as power and networking outages, zonal autoshift improves the availability of your application by shifting your application traffic away from an affected AZ to healthy AZs.

To get started, you can enable zonal autoshift for Application Load Balancer and Network Load Balancer, with cross-zone configuration disabled, using the console, SDK or CLI, or an Amazon CloudFormation template. Once enabled, Amazon will automatically shift application traffic away from an affected AZ, and shift it back after the failure is resolved. Zonal autoshift includes practice runs, a feature that proactively tests whether your application has sufficient capacity in each AZ to operate normally even after shifting away from an affected AZ. You configure practice runs to automatically apply zonal shifts that regularly check whether your application can tolerate losing capacity in an AZ.
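For illustration, a sketch of the request parameters for enabling zonal autoshift on a load balancer. The parameter names follow the Route 53 ARC zonal shift API's UpdateZonalAutoshiftConfiguration action; treat them, and the prerequisite of an existing practice run configuration, as assumptions to confirm against the API reference:

```python
# Sketch: build the request for enabling zonal autoshift on a resource.
# Parameter names are modeled on the ARC zonal shift
# UpdateZonalAutoshiftConfiguration API and should be verified; a practice
# run configuration is expected to exist for the resource before enabling.

def enable_autoshift_request(load_balancer_arn):
    """Return request parameters that turn zonal autoshift on."""
    return {
        "resourceIdentifier": load_balancer_arn,
        "zonalAutoshiftStatus": "ENABLED",
    }

req = enable_autoshift_request(
    "arn:aws-us-gov:elasticloadbalancing:us-gov-west-1:123456789012:"
    "loadbalancer/app/my-alb/50dc6c495c0c9188")
```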
Amazon OpenSearch Ingestion adds support for ingesting streaming data from Confluent Cloud
Amazon OpenSearch Ingestion now allows you to seamlessly ingest streaming data from Confluent Cloud Kafka clusters into your Amazon OpenSearch Service managed clusters or Serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near-real-time aggregations, sampling, and anomaly detection on data ingested from Confluent Cloud, helping you to build efficient data pipelines to power your complex observability use cases.

Amazon OpenSearch Ingestion pipelines can consume data from one or more topics in a Confluent Kafka cluster and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Confluent Kafka clusters via Amazon OpenSearch Ingestion, you can configure the number of consumers per topic and tune different fetch parameters for high- and low-priority data. You can also optionally use Confluent Schema Registry to specify your data schema to dynamically read data at ingest time. You can also check out this blog post by Confluent to learn more about this feature.
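A hedged sketch of what a pipeline definition for a Confluent Cloud source might look like. The endpoint values are placeholders, and the key names (`bootstrap_servers`, `registry_url`, and so on) follow Data Prepper Kafka source conventions; check them against the official OpenSearch Ingestion pipeline reference before use:

```yaml
version: "2"
confluent-kafka-pipeline:
  source:
    kafka:
      # Confluent Cloud bootstrap endpoint (placeholder value)
      bootstrap_servers: ["pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"]
      topics:
        - name: "orders"
          group_id: "osi-consumer-group"
      # Optional: resolve record schemas from Confluent Schema Registry
      schema:
        type: confluent
        registry_url: "https://psrc-xxxxx.us-east-1.aws.confluent.cloud"
  sink:
    - opensearch:
        hosts: ["https://my-domain.us-east-1.es.amazonaws.com"]
        index: "orders-index"
```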
Amazon CloudWatch Logs now supports account-level subscription filters in four additional Regions
Amazon CloudWatch Logs is excited to announce support for creating account-level subscription filters using the put-account-policy API in four additional Regions. This new capability enables you to deliver real-time log events that are ingested into Amazon CloudWatch Logs to an Amazon Kinesis Data Stream, Amazon Kinesis Data Firehose, or AWS Lambda for custom processing, analysis, or delivery to other destinations using a single account-level subscription filter.

Customers often need to forward all or a subset of logs to AWS services such as Amazon OpenSearch Service for various analytical use cases, or Amazon Kinesis Data Firehose for further streaming to other systems. Previously, customers had to set up a subscription filter for each log group. With account-level subscription filters, customers can egress logs ingested into multiple or all log groups by setting up a single subscription filter policy for the entire account. This saves time and reduces management overhead. The account-level subscription filter applies to both existing log groups and any future log groups that match the configuration. Each account can create one account-level subscription filter. CloudWatch Logs account-level subscription filters are now available in the AWS GovCloud (US-East), AWS GovCloud (US-West), Israel (Tel Aviv), and Canada West (Calgary) Regions. To learn more, please refer to the documentation on CloudWatch Logs account-level subscription filters.
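A sketch of building the policy document that put-account-policy expects. The field names below (`DestinationArn`, `RoleArn`, `FilterPattern`, `Distribution`) are assumptions modeled on per-log-group subscription filter settings; verify them against the PutAccountPolicy documentation:

```python
import json

# Sketch: build the JSON policy document for an account-level subscription
# filter. Field names are assumptions mirroring per-log-group subscription
# filters; confirm against the CloudWatch Logs PutAccountPolicy reference.

def build_subscription_policy(destination_arn, role_arn, filter_pattern=""):
    """Return a policy document forwarding matching log events account-wide."""
    return json.dumps({
        "DestinationArn": destination_arn,
        "RoleArn": role_arn,
        "FilterPattern": filter_pattern,  # empty pattern matches all events
        "Distribution": "Random",
    })

# Hypothetical ARNs for illustration only.
policy = build_subscription_policy(
    "arn:aws-us-gov:firehose:us-gov-east-1:123456789012:deliverystream/logs",
    "arn:aws-us-gov:iam::123456789012:role/cwl-to-firehose",
)
```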
AWS CloudShell now supports Amazon Virtual Private Cloud (VPC)
Today, AWS announces the general availability of Amazon Virtual Private Cloud (VPC) support for AWS CloudShell. This allows you to create CloudShell environments in a VPC, which enables you to use CloudShell securely within the same subnet as other resources in your VPC without the need for additional network configuration.

Prior to this release, there was no mechanism to use CloudShell for controlling the network flow to the internet. This release allows you to securely and conveniently launch CloudShell in your VPC and access the resources within it. AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development tools are pre-installed, so no local installation or configuration is required. With CloudShell you can run scripts with the AWS Command Line Interface (AWS CLI), define infrastructure with the AWS Cloud Development Kit (AWS CDK), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to increase your productivity.
To learn more about VPC connectivity in AWS CloudShell see our documentation.
Amazon Linux announces availability of AL2023.5 with new versions of PHP and Microsoft .NET
Today we are announcing the availability of the latest quarterly update to AL2023, containing the latest versions of PHP and .NET, along with the IPA client and mod-php.

Customers can take advantage of newer versions of PHP and .NET to ensure their applications are secure and efficient. Additionally, AL2023.5 includes packages like mod-php and the IPA client that can improve web server performance and simplify identity management integration, respectively, further streamlining development workflows and enhancing overall system efficiency. To learn more about other features and capabilities in AL2023.5, see the release notes. Amazon Linux 2023 is generally available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. To learn more about Amazon Linux 2023, see the AWS documentation.
Amazon Athena Provisioned Capacity now available in South America (São Paulo) and Europe (Spain)
Today, Amazon Athena made Provisioned Capacity available in the South America (São Paulo) and Europe (Spain) Regions. Provisioned Capacity is a feature of Athena that allows you to run SQL queries on fully managed, dedicated serverless resources for a fixed price, with no long-term commitments. Using Provisioned Capacity, you can selectively assign processing capacity to queries and control workload performance characteristics such as query concurrency and cost. You can scale capacity at any time, and pay only for the amount of capacity you need and the time it is active in your account.

Athena is a serverless, interactive query service that makes it possible to analyze petabyte-scale data with ease and flexibility. Provisioned Capacity provides workload management capabilities that help you prioritize, isolate, and scale your interactive query workloads. For example, use Provisioned Capacity if you want to scale capacity to run many queries at the same time or to isolate important queries from others running in your account. To get started, use the Athena console, AWS SDK, or CLI to request capacity for your account and select the workgroups whose queries you want to run on that capacity. Provisioned Capacity is also available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Europe (Ireland), and Europe (Stockholm). To learn more, see Managing query processing capacity in the Amazon Athena User Guide. To learn more about pricing, visit the Athena pricing page.
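For illustration, a sketch of the request parameters for reserving capacity and assigning workgroups to it. The parameter names follow Athena's CreateCapacityReservation and PutCapacityAssignmentConfiguration actions, but treat the exact values and constraints (such as minimum DPUs) as assumptions to confirm in the User Guide:

```python
# Sketch: request parameters for an Athena capacity reservation and for
# assigning workgroups to it. Names mirror the CreateCapacityReservation and
# PutCapacityAssignmentConfiguration APIs; verify details before use.

def capacity_request(name, target_dpus):
    """Parameters for reserving dedicated query-processing capacity."""
    return {"Name": name, "TargetDpus": target_dpus}

def assignment_request(reservation_name, workgroups):
    """Route the listed workgroups' queries onto the reservation."""
    return {
        "CapacityReservationName": reservation_name,
        "CapacityAssignments": [{"WorkGroupNames": list(workgroups)}],
    }

req = capacity_request("etl-capacity", 24)
assign = assignment_request("etl-capacity", ["primary", "reporting"])
```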
EventBridge Scheduler adds more universal targets including Amazon Bedrock
EventBridge Scheduler adds 650+ more AWS API actions as universal targets, bringing the total to 7,000+, including Amazon Bedrock.

EventBridge Scheduler allows you to create and run millions of scheduled events and tasks across AWS services without provisioning or managing the underlying infrastructure. EventBridge Scheduler supports one-time and recurring schedules that can be created using common scheduling expressions such as cron, rate, and specific time, with support for time zones and daylight saving time. The support for additional targets allows you to automate more use cases, such as scheduling your Bedrock model to run inference for text, image, and embedding models at a specific point in time.
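A sketch of the parameters for a one-time schedule that invokes a Bedrock model through the universal target. The `aws-sdk` target ARN format and the `bedrockruntime:invokeModel` segment are assumptions based on EventBridge Scheduler's universal target naming convention; verify them, along with the model ID and request body shape, against the documentation:

```python
import json

# Sketch: CreateSchedule-style parameters for a one-time Bedrock invocation
# via the EventBridge Scheduler universal target. The target ARN format and
# service/action segments are assumptions to verify against the docs.

def bedrock_schedule(name, when_utc, model_id, prompt, role_arn):
    """Return schedule parameters that call a Bedrock model once."""
    return {
        "Name": name,
        # one-time schedule at a specific timestamp
        "ScheduleExpression": f"at({when_utc})",
        "ScheduleExpressionTimezone": "UTC",
        "FlexibleTimeWindow": {"Mode": "OFF"},
        "Target": {
            "Arn": "arn:aws:scheduler:::aws-sdk:bedrockruntime:invokeModel",
            "RoleArn": role_arn,
            "Input": json.dumps({"ModelId": model_id,
                                 "Body": json.dumps({"prompt": prompt})}),
        },
    }

# Hypothetical names and ARNs for illustration only.
sched = bedrock_schedule("nightly-inference", "2024-06-27T02:00:00",
                         "anthropic.claude-v2", "Summarize today's metrics.",
                         "arn:aws:iam::123456789012:role/scheduler-bedrock")
```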
Amazon Redshift Serverless with lower base capacity available in additional regions
Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPUs) in the AWS Europe (Stockholm) and US West (Northern California) Regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 32 RPUs. With the new lower base capacity minimum of 8 RPUs, you now have even more flexibility to support a diverse set of workloads of small to large complexity based on your price-performance requirements. You can increment or decrement the base capacity in units of 8 RPUs.

Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when a workload needs a small amount of compute. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
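The capacity and billing rules above can be sketched as a small calculation. The per-RPU-hour price below is a placeholder parameter, not an actual rate; see the Redshift Serverless pricing page for real numbers:

```python
# Sketch: validate a Redshift Serverless base-capacity setting and estimate
# usage cost. The price argument is a hypothetical placeholder, not a quote.

def validate_base_capacity(rpus):
    """Base capacity must be at least 8 RPUs, adjustable in steps of 8."""
    if rpus < 8 or rpus % 8 != 0:
        raise ValueError("base capacity must be a multiple of 8, minimum 8")
    return rpus

def estimate_cost(rpus, runtime_seconds, price_per_rpu_hour):
    # Billing is per second of workload runtime, measured in RPU-hours.
    return rpus * (runtime_seconds / 3600) * price_per_rpu_hour

validate_base_capacity(8)
# 8 RPUs running for 30 minutes at a placeholder $0.375/RPU-hour:
cost = estimate_cost(8, runtime_seconds=1800, price_per_rpu_hour=0.375)
print(cost)  # 1.5
```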
AWS Control Tower introduces an API to discover landing zone operations
AWS Control Tower customers can now programmatically retrieve a list of all landing zone operations that have completed in the past 90 days, including create, update, reset, and delete. The output contains summary information like the operation identifier, operation type, and status to help identify initiated operations.

Until today, customers could only retrieve a landing zone operation if they had its operation identifier, and could not examine all operations at once. API users on the same team could not view operations performed by others in the same landing zone, resulting in lost context and reduced visibility into all operations. Now customers can easily view, audit, and troubleshoot operations for their entire landing zone to avoid duplicate operations and improve overall operational efficiency. To learn more about these APIs, review the configurations for landing zone APIs and the API References in the AWS Control Tower User Guide. The new APIs are available in AWS Regions where AWS Control Tower is available. For a list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
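A sketch of auditing the listed operations, for example to spot failed updates quickly. The response field names below are assumptions based on the summary fields the announcement describes (identifier, type, status), not the official API reference:

```python
from collections import Counter

# Sketch: summarize landing zone operations from the new list API.
# Field names are assumptions modeled on the announcement's description;
# verify them against the ListLandingZoneOperations API reference.

def summarize_operations(response):
    """Count landing zone operations by status."""
    return Counter(op["status"]
                   for op in response.get("landingZoneOperations", []))

# Example payload mimicking a ListLandingZoneOperations response.
sample = {"landingZoneOperations": [
    {"operationIdentifier": "11111111-aaaa", "operationType": "UPDATE",
     "status": "SUCCEEDED"},
    {"operationIdentifier": "22222222-bbbb", "operationType": "RESET",
     "status": "FAILED"},
    {"operationIdentifier": "33333333-cccc", "operationType": "UPDATE",
     "status": "SUCCEEDED"},
]}

print(summarize_operations(sample))  # Counter({'SUCCEEDED': 2, 'FAILED': 1})
```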
AWS Blogs
AWS Japan Blog (Japanese)
- SaaS service connectivity within the VPC Lattice service network
- [Event Report] Accelerating Education with Generative AI! Latest Trends and Practice Guides (2024/05/29)
- Introducing Salsonide’s AWS generative AI case study: “Streamlining law-related owned-media article creation with a RAG environment using Amazon Bedrock and Amazon Kendra”
- The first Japanese LLM featured on AWS Marketplace! A walkthrough of using Oltz’s LHTM-OPT
- Innovations in generative biology with AWS and EvolutionaryScale
AWS Japan Startup Blog (Japanese)
AWS News Blog
AWS Architecture Blog
AWS Cloud Operations & Migrations Blog
AWS Big Data Blog
AWS Database Blog
- Amazon DynamoDB use cases for media and entertainment customers
- Adding real-time ML predictions for your Amazon Aurora database: Part 2
AWS HPC Blog
AWS for Industries
AWS Machine Learning Blog
- Automate derivative confirms processing using AWS AI services for the capital markets industry
- AI-powered assistants for investment research with multi-modal data: An application of Agents for Amazon Bedrock