11/15/2024, 12:00:00 AM ~ 11/18/2024, 12:00:00 AM (UTC)

Recent Announcements

Amazon SageMaker Notebook Instances now support Trainium1 and Inferentia2-based instances

We are pleased to announce general availability of Trainium1- and Inferentia2-based EC2 instances on SageMaker Notebook Instances.

Amazon EC2 Trn1 instances, powered by AWS Trainium chips, and Inf2 instances, powered by AWS Inferentia2 chips, are purpose-built for high-performance deep learning training and inference, respectively. Trn1 instances offer cost savings over other comparable Amazon EC2 instances for training 100B+ parameter generative AI models such as large language models (LLMs) and latent diffusion models. Inf2 instances deliver low-cost, high-performance inference for generative AI, including LLMs and vision transformers. You can use Trn1 and Inf2 instances across a broad set of applications, such as text summarization, code generation, question answering, image and video generation, recommendation, and fraud detection. Amazon EC2 Trn1 instances are available for SageMaker Notebook Instances in the AWS US East (N. Virginia and Ohio) and US West (Oregon) Regions. Amazon EC2 Trn1n instances are available for SageMaker Notebook Instances in AWS US East (N. Virginia and Ohio). Amazon EC2 Inf2 instances are available for SageMaker Notebook Instances in AWS US West (Oregon), US East (N. Virginia and Ohio), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), EU (London), Asia Pacific (Singapore), EU (Stockholm), EU (Paris), and South America (São Paulo). Visit the developer guide for instructions on setting up and using SageMaker Notebook Instances.
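
As a hedged illustration, the sketch below uses boto3 to launch a notebook instance on an Inferentia2-based instance type. The notebook instance name, IAM role ARN, and instance type are placeholders, and availability of a given type depends on your Region.

```python
import boto3

# Minimal sketch: launch a SageMaker Notebook Instance on an Inferentia2-based
# instance type. The name, role ARN, and instance type below are placeholders.
sagemaker = boto3.client("sagemaker", region_name="us-east-1")

response = sagemaker.create_notebook_instance(
    NotebookInstanceName="inf2-notebook",          # hypothetical name
    InstanceType="ml.inf2.xlarge",                 # or an ml.trn1.* type where supported
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    VolumeSizeInGB=50,
)
print(response["NotebookInstanceArn"])
```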

AWS Application Load Balancer announces CloudFront integration with built-in WAF

We are announcing a new one-click integration on Application Load Balancer (ALB) that attaches an Amazon CloudFront distribution from the ALB console. This makes it easy to use CloudFront as a distributed single point of entry for your application that ingests, absorbs, and filters all inbound traffic before it reaches your ALB. The feature also enables a preconfigured AWS WAF web ACL with basic security protections as a first line of defense against common web threats. Overall, you can enable seamless protections from ALB, CloudFront, and AWS WAF with minimal configuration to secure your application.

Previously, to accelerate and secure your applications, you had to configure a CloudFront distribution with proper caching, request forwarding, and security protections that connected to your ALB on the right port and protocol. This required navigating between multiple services and manual configuration. With this new integration, the ALB console handles the creation and configuration of the ALB, CloudFront, and AWS WAF resources. CloudFront uses your application's Cache-Control headers to cache content such as HTML, CSS/JavaScript, and images close to viewers, improving performance and reducing load on your application. With an additional checkbox, you can attach a security group configured to allow traffic from CloudFront IP addresses; if maintained as the only inbound rule, it ensures all requests are processed and inspected by CloudFront and WAF. This new integration is available for both new and existing Application Load Balancers. Standard ALB, CloudFront, and AWS WAF pricing applies. The feature is available in all commercial AWS Regions. To learn more about this feature, visit the ALB and CloudFront sections in the AWS User Guide.
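
The console checkbox handles the security group for you. As a rough sketch of the equivalent manual step, the boto3 snippet below looks up the CloudFront origin-facing managed prefix list and allows it as the only HTTPS ingress source on the ALB's security group; the security group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Find the AWS-managed prefix list of CloudFront origin-facing IP ranges.
prefix_lists = ec2.describe_managed_prefix_lists(
    Filters=[{"Name": "prefix-list-name",
              "Values": ["com.amazonaws.global.cloudfront.origin-facing"]}]
)
cloudfront_pl_id = prefix_lists["PrefixLists"][0]["PrefixListId"]

# Allow only CloudFront to reach the ALB on HTTPS (sg-0123... is a placeholder).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": cloudfront_pl_id,
                           "Description": "CloudFront origin-facing"}],
    }],
)
```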

AWS IoT Core adds capabilities to enrich MQTT messages and simplify permission management

AWS IoT Core, a managed cloud service that lets you securely connect Internet of Things (IoT) devices to the cloud and manage them at scale, announces two new capabilities: the ability to enrich MQTT messages with additional data, and a thing-to-connection association that simplifies permission management. Message enrichment enables developers to augment MQTT messages from devices with additional information from the thing registry, without modifying their devices. Thing-to-connection association enables mapping an MQTT client to a registry thing for client IDs that don't match the thing name. This enables developers to leverage registry information in IoT policies, easily associate device actions with lifecycle events, and use existing capabilities such as custom cost allocation and resource-specific logging, which were previously only available when client IDs and thing names matched.

To enrich all messages from devices, developers can define a subset of registry attributes as propagating attributes. They can then customize message routing and processing workflows using this appended data. For example, in automotive applications, developers can selectively route messages to the desired backend depending on the appended metadata, such as the vehicle make and type stored in the thing registry. Additionally, with thing-to-connection association, developers can use existing features such as registry metadata in IoT policies, associating AWS IoT Core lifecycle events with a thing, custom cost allocation through billing groups, and resource-specific logging, even if the MQTT client ID and thing name differ. These new features are available in all AWS Regions where AWS IoT Core is available. For more information, refer to the developer guide and API documentation.
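
Once a connection is associated with a registry thing, IoT policies can reference that thing's registry data through policy variables. As a hedged sketch, the snippet below creates a policy that scopes publish permission using the connected thing's name and a registry attribute; the topic layout, the "vehicleType" attribute, and the account/Region in the ARN are assumptions.

```python
import json
import boto3

iot = boto3.client("iot")

# Sketch: a policy that uses the connected thing's registry data via policy
# variables. The topic structure and the "vehicleType" attribute are assumptions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iot:Publish",
        "Resource": (
            "arn:aws:iot:us-east-1:123456789012:topic/"            # placeholder account/Region
            "fleet/${iot:Connection.Thing.Attributes[vehicleType]}/"
            "${iot:Connection.Thing.ThingName}/telemetry"
        ),
    }],
}

iot.create_policy(
    policyName="ThingScopedPublishPolicy",   # hypothetical name
    policyDocument=json.dumps(policy_document),
)
```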

Amazon CloudWatch launches Observability Solutions for AWS Services and Workloads on AWS

Observability solutions help you get up and running faster with infrastructure and application monitoring on AWS. They are intended for developers who need opinionated guidance about the best options for observing AWS services, custom applications, and third-party workloads. Observability solutions include working examples of instrumentation, telemetry collection, custom dashboards, and metric alarms.

Using observability solutions, you can select from a catalog of available solutions that deliver focused observability guidance for AWS services and common workloads such as Java Virtual Machine (JVM), Apache Kafka, Apache Tomcat, or NGINX. Solutions cover monitoring tasks including installing and configuring the Amazon CloudWatch agent, deploying pre-defined custom dashboards, and setting metric alarms. Observability solutions also include guidance about observability features such as Detailed Monitoring metrics for infrastructure, Container Insights for container monitoring, and Application Signals for monitoring applications. Solutions are available for Amazon CloudWatch and Amazon Managed Service for Prometheus. Observability solutions can be deployed as-is or customized to suit specific use cases, with options for enabling features or configuring deployments based on workload needs. Observability solutions are available in all commercial AWS Regions. To get started with observability solutions, navigate to the observability solutions page in the CloudWatch console.
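
The solutions deploy dashboards and alarms on metrics collected by the CloudWatch agent. As a hedged sketch, not taken from any solution template, the snippet below creates an alarm on the agent's default mem_used_percent metric in the CWAgent namespace; metric and dimension names depend on your agent configuration, and the instance ID is a placeholder.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Sketch: alarm on a metric published by the CloudWatch agent. The CWAgent
# namespace and mem_used_percent metric are the agent's defaults; the instance
# ID is a placeholder and your agent configuration may use different dimensions.
cloudwatch.put_metric_alarm(
    AlarmName="high-memory-utilization",
    Namespace="CWAgent",
    MetricName="mem_used_percent",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="missing",
)
```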

Amazon Data Firehose supports continuous replication of database changes to Apache Iceberg Tables in Amazon S3

Amazon Data Firehose now enables capture and replication of database changes to Apache Iceberg Tables in Amazon S3 (Preview). This new feature allows customers to easily stream real-time data from MySQL and PostgreSQL databases directly into Apache Iceberg Tables.

Firehose is a fully managed, serverless streaming service that enables customers to capture, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this functionality, Firehose performs an initial complete data copy from selected database tables, then continuously streams Change Data Capture (CDC) updates to reflect inserts, updates, and deletions in the Apache Iceberg Tables. This streamlined solution eliminates complex data pipeline setups while minimizing impact on database transaction performance. Key capabilities include:

• Automatic creation of Apache Iceberg Tables matching source database schemas
• Automatic schema evolution in response to source changes
• Selective replication of specific databases, tables, and columns

This preview feature is available in all AWS Regions except the China, AWS GovCloud (US), and Asia Pacific (Malaysia) Regions. For terms and conditions, see Beta Service Participation in the AWS Service Terms. To get started, visit the Amazon Data Firehose documentation and console. To learn more about this feature, visit this AWS blog post.

Introducing Amazon Route 53 Resolver DNS Firewall Advanced

Today, AWS announced Amazon Route 53 Resolver DNS Firewall Advanced, a new set of capabilities on Route 53 Resolver DNS Firewall that allow you to monitor and block suspicious DNS traffic associated with advanced DNS threats, such as DNS tunneling and Domain Generation Algorithms (DGAs), which are designed to avoid detection by threat intelligence feeds or are difficult for threat intelligence feeds alone to track and block in time.

Today, Route 53 Resolver DNS Firewall helps you block DNS queries for domains identified as low-reputation or suspected to be malicious, and allow queries for trusted domains. With DNS Firewall Advanced, you can now enforce additional protections that monitor and block your DNS traffic in real time based on anomalies identified in the domain names being queried from your VPCs. To get started, you can configure one or more DNS Firewall Advanced rules, specifying the type of threat (DGA, DNS tunneling) to be inspected. You can add the rules to a DNS Firewall rule group and enforce them on your VPCs by associating the rule group with each desired VPC directly or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles. Route 53 Resolver DNS Firewall Advanced is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and pricing, visit the Route 53 Resolver DNS Firewall webpage and the Route 53 pricing page. To get started, visit the Route 53 documentation.
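
Once the Advanced rules are in a rule group, associating that group with a VPC uses the same API as existing DNS Firewall rule groups. The sketch below is hedged; the rule group ID and VPC ID are placeholders.

```python
import uuid
import boto3

resolver = boto3.client("route53resolver")

# Sketch: enforce a DNS Firewall rule group (containing DNS Firewall Advanced
# rules) on a VPC. The rule group ID and VPC ID are placeholders.
resolver.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),          # idempotency token
    FirewallRuleGroupId="rslvr-frg-0123456789abcdef",
    VpcId="vpc-0123456789abcdef0",
    Priority=101,                                # evaluation order among associations
    Name="dns-firewall-advanced-association",
)
```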

Amazon Time Sync Service supports Microsecond-Accurate Time in Stockholm Region

The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on Amazon EC2 instances in the Europe (Stockholm) Region.

Built on Amazon's proven network infrastructure and the AWS Nitro System, the service gives customers access to local, GPS-disciplined reference clocks on supported EC2 instances. These clocks can be used to more easily order application events, measure one-way network latency, increase distributed application transaction speed, and incorporate in-region and cross-region scalability features, while also simplifying technical designs. This capability is an improvement over many on-premises time solutions, and it is the first microsecond-range time service offered by any cloud provider. Additionally, you can audit your clock accuracy from your instance to measure and monitor the expected microsecond-range accuracy. Customers already using the Amazon Time Sync Service on supported instances will see improved clock accuracy automatically, without needing to adjust their AMI or NTP client settings. Customers can also use standard PTP clients and configure a new PTP Hardware Clock (PHC) to get the best accuracy possible. Both NTP and PTP can be used without any updates to VPC configurations. Amazon Time Sync's microsecond-accurate time is available starting today in Europe (Stockholm), as well as in additional Regions on supported EC2 instance types. We will be expanding support to more AWS Regions and EC2 instance types. There is no additional charge for using this service. Configuration instructions and more information on the Amazon Time Sync Service are available in the EC2 User Guide.

AWS Organizations member accounts can now regain access to accidentally locked Amazon S3 buckets

AWS Organizations member accounts can now use a simple process through AWS Identity and Access Management (IAM) to regain access to accidentally locked Amazon S3 buckets. With this capability, you can repair misconfigured S3 bucket policies while improving your organization's security and compliance posture.

IAM now provides centralized management of long-term root credentials, helping you prevent unintended access and improve your account security at scale across your organization. You can also perform a curated set of root-only tasks using short-lived, privileged root sessions. For example, you can centrally delete an S3 bucket policy in just a few steps. First, navigate to the Root access management page in the IAM console, select an account, and choose Take privileged action. Next, select Delete bucket policy and select your chosen S3 bucket. AWS Organizations member accounts can use this capability in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions. Customers can use this new capability via the IAM console or programmatically using the AWS CLI or SDK. For more information, visit the AWS News Blog and IAM documentation.
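
Programmatically, the same privileged task can be performed with the STS AssumeRoot API. The sketch below is hedged: it assumes the AWS-managed S3UnlockBucketPolicy task policy, and the member account ID and bucket name are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Sketch: obtain short-lived, task-scoped root credentials for a member account
# and use them to delete a misconfigured bucket policy. The member account ID,
# bucket name, and task policy ARN are placeholders/assumptions.
creds = sts.assume_root(
    TargetPrincipal="111122223333",  # member account ID (placeholder)
    TaskPolicyArn={"arn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"},
    DurationSeconds=900,
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.delete_bucket_policy(Bucket="accidentally-locked-bucket")  # placeholder bucket
```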

AWS Partner CRM Connector Adds Partner Central API Support

Starting today, the AWS Partner CRM Connector further simplifies co-sell actions between Salesforce and AWS Partner Central through the APN Customer Engagement (ACE) integration. Partners can now share and receive AWS opportunities faster through the Partner Central API, use multi-object mapping to simplify related field mapping and reduce redundant data between Salesforce and the ACE Pipeline Manager, and receive submission updates via EventBridge, making it easier than ever to supercharge co-selling and sales motions.

These new capabilities enable partners to manage AWS co-sell opportunities with increased speed and flexibility. The Partner Central API accelerates information sharing, while EventBridge pushes real-time update notifications for key actions as they occur. Multi-object mapping adds another layer of efficiency, giving partners control over data flow by simplifying account look-ups and reducing repetitive entries across Salesforce fields and business workflows. This modular connector provides greater governance, visibility, and effectiveness in managing ACE opportunities and leads, as well as AWS Marketplace private offers and resale authorizations. It enables automation through sales process alignment and accelerates adoption by extending capabilities to field sales teams. The AWS Partner CRM Connector for Salesforce is available as an application to install at no cost from the Salesforce AppExchange. To learn more, visit the AWS Partner Central documentation and the AWS Partner CRM Integration documentation.

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects, making it easier for customers to govern access to data and machine learning (ML) assets. With this capability, administrators can set up Amazon DataZone projects by importing their existing authorized users, security configurations, and policies from Amazon SageMaker domains.

Today, Amazon SageMaker customers use domains to organize lists of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud configurations. With this launch, administrators can accelerate the process of setting up governance for data and ML assets in Amazon SageMaker. They can import users and configurations from existing SageMaker domains into Amazon DataZone projects, mapping SageMaker users to corresponding Amazon DataZone project members. This enables project members to search, discover, and consume ML and data assets within Amazon SageMaker capabilities such as Studio, Canvas, and notebooks. Project members can also publish these assets from Amazon SageMaker to the DataZone business catalog, enabling other project members to discover and request access to them. This capability is available in all AWS Regions where Amazon SageMaker and Amazon DataZone are currently available. To get started, see the Amazon SageMaker administrator guide.

Centrally manage root access in AWS Identity and Access Management (IAM)

Today, AWS Identity and Access Management (IAM) is launching a new capability that allows customers to centrally manage their root credentials, simplify auditing of credentials, and perform tightly scoped privileged tasks across their AWS member accounts managed using AWS Organizations.

Now, administrators can remove unnecessary root credentials for member accounts in AWS Organizations and then, if needed, perform tightly scoped privileged actions using temporary credentials. By removing unnecessary credentials, administrators have fewer highly privileged root credentials that they must secure with multi-factor authentication (MFA), making it easier to meet MFA compliance requirements. This helps administrators control highly privileged access in their accounts, reduces operational effort, and makes it easier to secure their AWS environment. The capability to manage root access in AWS member accounts is available in all AWS Regions, including the AWS GovCloud (US) Regions and the AWS China Regions. To get started managing your root access in IAM, visit the list of resources below; a short API sketch follows the list.

See AWS News Blog

Learn more with AWS Documentation

Get started in AWS IAM console
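
As a hedged sketch of the centralized root management APIs described above, the snippet below enables central root credentials management and privileged root sessions for the organization, then lists the enabled features. It assumes it runs from the Organizations management account (or a delegated administrator) with the necessary IAM permissions.

```python
import boto3

# Sketch: run from the Organizations management account (or delegated admin)
# with permissions for IAM's centralized root access management APIs.
iam = boto3.client("iam")

# Allow central management (and removal) of member-account root credentials.
iam.enable_organizations_root_credentials_management()

# Allow short-lived, task-scoped privileged root sessions in member accounts.
iam.enable_organizations_root_sessions()

# Confirm which organization-level root access features are enabled.
features = iam.list_organizations_features()
print(features.get("EnabledFeatures", []))
```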

Split cost allocation data for Amazon EKS now supports metrics from Amazon CloudWatch Container Insights

Starting today, you can use CPU and memory metrics collected by Amazon CloudWatch Container Insights for your Amazon Elastic Kubernetes Service (EKS) clusters in split cost allocation data for Amazon EKS, so you can get granular Kubernetes pod-level costs and make them available in AWS Cost and Usage Reports (CUR). This provides more granular cost visibility for clusters running multiple application containers on shared EC2 instances, enabling better allocation of the shared costs of your EKS clusters.

To enable this feature, you need to enable Container Insights with enhanced observability for Amazon EKS. You can use either the Amazon CloudWatch Observability EKS add-on or the Amazon CloudWatch Observability Helm chart to install the CloudWatch agent and the Fluent Bit agent on an Amazon EKS cluster. You also need to enable split cost allocation data for Amazon EKS in the AWS Billing and Cost Management console, and choose Amazon CloudWatch as the metrics source. Once the feature is enabled, pod-level usage data will be available in CUR within 24 hours. This feature is available in all AWS Regions where split cost allocation data for Amazon EKS is available. To get started, visit Understanding split cost allocation data. To learn more about Container Insights and its pricing, visit Container Insights and Amazon CloudWatch Pricing.
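
As a hedged sketch of the first step, the snippet below installs the CloudWatch Observability EKS add-on with boto3; the cluster name and the IAM role attached to the add-on are placeholders, and enabling split cost allocation data itself is done in the Billing and Cost Management console.

```python
import boto3

eks = boto3.client("eks")

# Sketch: install the CloudWatch Observability add-on, which deploys the
# CloudWatch agent (Container Insights with enhanced observability) and
# Fluent Bit. The cluster name and role ARN are placeholders.
eks.create_addon(
    clusterName="my-eks-cluster",
    addonName="amazon-cloudwatch-observability",
    serviceAccountRoleArn="arn:aws:iam::123456789012:role/CloudWatchAgentServerRole",
    resolveConflicts="OVERWRITE",
)

# Optionally wait until the add-on is active.
waiter = eks.get_waiter("addon_active")
waiter.wait(clusterName="my-eks-cluster", addonName="amazon-cloudwatch-observability")
```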

AWS Control Tower launches configurable managed controls implemented using resource control policies

Today we are excited to announce the launch of AWS managed controls implemented using resource control policies (RCPs) in AWS Control Tower. These new optional preventive controls help you centrally apply organization-wide access controls around AWS resources in your organization. Additionally, you can now configure the new RCP and existing service control policy (SCP) preventive controls to specify AWS IAM (principal and resource) exemptions where applicable. Exemptions can be configured when you don't want a principal or a resource to be governed by the control. To see a full list of the new controls, see the controls reference guide.

With this addition, AWS Control Tower now supports over 30 configurable preventive controls, providing off-the-shelf AWS-managed controls to help you scale your business using new AWS workloads and services. At launch, you can enable AWS Control Tower RCPs for Amazon Simple Storage Service, AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager. For example, an RCP can require that the organization's Amazon S3 resources be accessible only to IAM principals that belong to the organization, regardless of the permissions granted in individual S3 bucket policies. AWS Control Tower's new RCP-based preventive controls are available in all AWS commercial Regions where AWS Control Tower is available. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
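
Controls in AWS Control Tower can also be enabled programmatically with the EnableControl API. The sketch below is hedged: the control identifier, organizational unit ARN, and configuration parameters are placeholders rather than real RCP control identifiers, which you would look up in the controls reference guide.

```python
import boto3

controltower = boto3.client("controltower")

# Sketch: enable a configurable preventive control on an organizational unit.
# The control and target ARNs below are placeholders, not real identifiers.
response = controltower.enable_control(
    controlIdentifier="arn:aws:controlcatalog:::control/EXAMPLE-RCP-CONTROL-ID",
    targetIdentifier="arn:aws:organizations::123456789012:ou/o-exampleorgid/ou-examplerootid-exampleouid",
    parameters=[
        # Hypothetical exemption parameter; actual parameter names vary by control.
        {"key": "ExemptedPrincipalArns",
         "value": ["arn:aws:iam::123456789012:role/BreakGlassRole"]},
    ],
)
print(response["operationIdentifier"])
```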

Amazon EventBridge event delivery latency metric now in the AWS GovCloud (US) Regions

The Amazon EventBridge Event Bus end-to-end event delivery latency metric in Amazon CloudWatch, which tracks the duration between event ingestion and successful delivery to the targets on your event bus, is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets.

Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric, you can better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture. To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.
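
As a hedged sketch, the alarm below watches the p99 of IngestionToInvocationSuccessLatency in the AWS/Events namespace for a named event bus. The EventBusName dimension, the millisecond unit, and the threshold are assumptions to adapt to your own setup.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-gov-west-1")

# Sketch: alarm when p99 end-to-end delivery latency on an event bus stays high.
# The EventBusName dimension, unit, and threshold are assumptions.
cloudwatch.put_metric_alarm(
    AlarmName="eventbridge-delivery-latency-p99",
    Namespace="AWS/Events",
    MetricName="IngestionToInvocationSuccessLatency",
    Dimensions=[{"Name": "EventBusName", "Value": "my-application-bus"}],  # placeholder
    ExtendedStatistic="p99",
    Period=300,
    EvaluationPeriods=3,
    Threshold=5000.0,          # milliseconds (assumed unit)
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```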

AWS Transfer Family is now available in the AWS Asia Pacific (Malaysia) Region

Customers in the AWS Asia Pacific (Malaysia) Region can now use AWS Transfer Family.

AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS), and Applicability Statement 2 (AS2). In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers modernize and migrate their business-to-business file transfers to AWS. To learn more about AWS Transfer Family, visit our product page and user guide. See the AWS Region Table for complete regional availability information.
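
As a hedged sketch, the snippet below creates a service-managed SFTP endpoint in the new Region; ap-southeast-5 is the Asia Pacific (Malaysia) Region code, and users, IAM roles, and S3 or EFS targets would be configured separately.

```python
import boto3

# Sketch: create a Transfer Family SFTP server in the Asia Pacific (Malaysia)
# Region (ap-southeast-5). Users, IAM roles, and storage targets are configured
# separately and are not shown here.
transfer = boto3.client("transfer", region_name="ap-southeast-5")

server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
)
print(server["ServerId"])
```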

AWS Backup now supports resource type and multiple tag selections in backup policies

Today, AWS Backup announces additional options for assigning resources to a backup policy in AWS Organizations. Customers can now select specific resources by resource type and exclude them based on resource type or tag. They can also combine multiple tags within the same resource selection.

With additional options to select resources, customers can implement flexible backup strategies across their organizations by combining multiple resource types and/or tags. They can also exclude resources they do not want to back up using resource type or tag, optimizing cost for non-critical resources. To get started, use your AWS Organizations management account to create or edit an AWS Backup policy. Then, create or modify a resource selection using the AWS Organizations API, CLI, or the JSON editor in either the AWS Organizations or AWS Backup console. AWS Backup support for enhanced resource selection in backup policies is available in all commercial Regions where AWS Backup cross-account management is available. For more information, visit our documentation and launch blog.
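
As a hedged sketch of the Organizations API path, the snippet below creates a backup policy with a simple tag-based resource selection from the management account. The plan, Region, vault, role, and tag values are placeholders, and the new resource-type and exclusion selection keys follow the schema in the AWS Backup documentation rather than being reproduced here.

```python
import json
import boto3

organizations = boto3.client("organizations")

# Sketch: create a backup policy with a tag-based resource selection from the
# Organizations management account. All names and values are placeholders; see
# the AWS Backup documentation for the new resource-type/exclusion syntax.
backup_policy = {
    "plans": {
        "daily-backups": {
            "regions": {"@@assign": ["us-east-1"]},
            "rules": {
                "daily": {
                    "schedule_expression": {"@@assign": "cron(0 5 ? * * *)"},
                    "target_backup_vault_name": {"@@assign": "Default"},
                }
            },
            "selections": {
                "tags": {
                    "prod-resources": {
                        "iam_role_arn": {"@@assign": "arn:aws:iam::$account:role/BackupRole"},
                        "tag_key": {"@@assign": "environment"},
                        "tag_value": {"@@assign": ["production"]},
                    }
                }
            },
        }
    }
}

organizations.create_policy(
    Name="org-backup-policy",
    Description="Tag-based backup selection (sketch)",
    Type="BACKUP_POLICY",
    Content=json.dumps(backup_policy),
)
```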

Amazon EC2 G6 instances now available in the AWS GovCloud (US-West) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are available in the AWS GovCloud (US-West) Region. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization, as well as graphics workloads such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage. Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6 instance page.
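
As a hedged sketch, the snippet below launches a single g6.xlarge instance in the GovCloud (US-West) Region; the AMI ID, key pair, and subnet are placeholders for your own values.

```python
import boto3

# Sketch: launch a G6 instance in AWS GovCloud (US-West). The AMI ID, key pair,
# and subnet below are placeholders.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI (e.g., a Deep Learning AMI)
    InstanceType="g6.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",               # placeholder
    SubnetId="subnet-0123456789abcdef0", # placeholder
)
print(response["Instances"][0]["InstanceId"])
```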

Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your identity provider (IdP), such as Entra ID or Okta, to datasets stored in Amazon S3, helping you manage data permissions at scale. This integration gives you the ability to manage S3 permissions for AWS IAM Identity Center users and groups when using Redshift, without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions on buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups will automatically have permission to read and write data in S3 using the COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically updates S3 permissions as users are added to and removed from user groups in the IdP. Amazon S3 Access Grants with Amazon Redshift are available for users federated via an IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.
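
As a hedged sketch, the snippet below grants an IAM Identity Center group read/write access to an S3 prefix through S3 Access Grants. The account ID, location ID, prefix, and directory group ID are placeholders, and the Access Grants instance and location must already be set up and associated with IAM Identity Center.

```python
import boto3

s3control = boto3.client("s3control")

# Sketch: grant an IAM Identity Center (directory) group READWRITE access to a
# prefix registered under an S3 Access Grants location. All IDs are placeholders;
# the Access Grants instance must already be associated with Identity Center.
s3control.create_access_grant(
    AccountId="123456789012",
    AccessGrantsLocationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    AccessGrantsLocationConfiguration={"S3SubPrefix": "analytics/*"},
    Grantee={
        "GranteeType": "DIRECTORY_GROUP",
        "GranteeIdentifier": "01234567-89ab-cdef-0123-456789abcdef",  # Identity Center group ID
    },
    Permission="READWRITE",
)
```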

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

Containers

AWS Database Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

Networking & Content Delivery

AWS Security Blog

AWS Storage Blog

Open Source Projects

AWS CLI

AWS CDK

Bottlerocket OS