11/25/2024, 12:00:00 AM ~ 11/26/2024, 12:00:00 AM (UTC)

Recent Announcements

AWS Cloud WAN simplifies on-premises connectivity via AWS Direct Connect

AWS Cloud WAN now supports native integration with AWS Direct Connect, simplifying connectivity between your on-premises networks and the AWS cloud. The new capability enables you to directly attach your Direct Connect gateways to Cloud WAN without the need for an intermediate AWS Transit Gateway, allowing seamless connectivity between your data centers or offices and your Amazon Virtual Private Clouds (VPCs) across AWS Regions globally.

Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Direct Connect allows you to create a dedicated network connection to AWS, bypassing the public internet. Until today, customers needed to deploy an intermediate transit gateway to interconnect their Direct Connect-based networks with Cloud WAN. Starting today, you can directly attach your Direct Connect gateway to a Cloud WAN core network, simplifying connectivity between your on-premises locations and VPCs. The new Cloud WAN Direct Connect attachment adds support for automatic route propagation between AWS and on-premises networks using Border Gateway Protocol (BGP). Direct Connect attachments also support existing Cloud WAN features such as central policy-based management, tag-based attachment automation, and segmentation for advanced security. The new Direct Connect attachment for Cloud WAN is initially available in eleven commercial Regions. Pricing for a Direct Connect attachment is the same as for any other Cloud WAN attachment. For additional information, please visit the Cloud WAN documentation, pricing page, and blog post.
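As a rough sketch of the new workflow, the boto3 call below attaches an existing Direct Connect gateway to a Cloud WAN core network. The operation and response shape follow the new Network Manager API as described in the announcement; the IDs and ARN are placeholders, so verify the details against the Cloud WAN documentation.

```python
import boto3

# Sketch: attach an existing Direct Connect gateway to a Cloud WAN core network.
# All identifiers below are placeholders.
nm = boto3.client("networkmanager", region_name="us-west-2")

resp = nm.create_direct_connect_gateway_attachment(
    CoreNetworkId="core-network-0123456789abcdef0",
    DirectConnectGatewayArn=(
        "arn:aws:directconnect::123456789012:dx-gateway/"
        "abcd1234-5678-90ab-cdef-1234567890ab"
    ),
    EdgeLocations=["us-west-2"],              # core network edge(s) to attach at
    Tags=[{"Key": "env", "Value": "prod"}],   # tags can drive attachment automation
)
print(resp["DirectConnectGatewayAttachment"]["Attachment"]["AttachmentId"])
```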

Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets

Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets using bucket policies. With enforcement of conditional writes, you can now mandate that S3 check the existence of an object before creating it in your bucket. Similarly, you can also mandate that S3 check the state of the object’s content before updating it in your bucket. This helps you simplify distributed applications by preventing unintentional data overwrites, especially in high-concurrency, multi-writer scenarios.

To enforce conditional write operations, you can now use the s3:if-none-match or s3:if-match condition keys to write a bucket policy that mandates the use of the HTTP if-none-match or HTTP if-match conditional headers in S3 PutObject and CompleteMultipartUpload API requests. With this bucket policy in place, any attempt to write an object to your bucket without the required conditional header will be rejected. You can use this to centrally enforce the use of conditional writes across all the applications that write to your bucket. You can use bucket policies to enforce conditional writes at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.
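As a minimal sketch of such a policy (the bucket name is hypothetical, and the Null-condition pattern should be checked against the S3 User Guide examples), the following denies any PutObject request that arrives without the if-none-match header:

```python
import json
import boto3

# Sketch: deny PutObject requests that do not carry the HTTP if-none-match
# conditional header, using the new s3:if-none-match condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireConditionalWrites",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",    # hypothetical bucket
        "Condition": {"Null": {"s3:if-none-match": "true"}}  # header absent -> deny
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="amzn-s3-demo-bucket", Policy=json.dumps(policy))
```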

Amazon S3 adds new functionality for conditional writes

Amazon S3 can now perform conditional writes that evaluate whether an object is unmodified before updating it. This helps you coordinate simultaneous writes to the same object and prevents multiple concurrent writers from unintentionally overwriting the object without knowing the state of its content. You can use this capability by providing the ETag of an object in S3 PutObject or CompleteMultipartUpload API requests, in both S3 general purpose and directory buckets.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets. Similar to using the HTTP if-none-match conditional header to check for the existence of an object before creating it, clients can now perform conditional-write checks on an object’s ETag, which reflects changes to the object, by specifying it via the HTTP if-match header in the API request. S3 then evaluates whether the object’s ETag matches the value provided in the API request before committing the write, and prevents your clients from overwriting the object until the condition is satisfied. This new conditional header can help improve the efficiency of your large-scale analytics, distributed machine learning, and other highly parallelized workloads by reliably offloading compare-and-swap operations to S3. This new conditional-write functionality is available at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.
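Here is a minimal compare-and-swap sketch with boto3, assuming an SDK version recent enough to expose the new IfMatch parameter on put_object; the bucket and key names are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "amzn-s3-demo-bucket", "shared/state.json"  # hypothetical names

# Read the current version's ETag, then write only if it is still current.
etag = s3.head_object(Bucket=bucket, Key=key)["ETag"]

try:
    s3.put_object(Bucket=bucket, Key=key, Body=b'{"v": 2}', IfMatch=etag)
except ClientError as e:
    if e.response["Error"]["Code"] == "PreconditionFailed":
        pass  # another writer won the race: re-read the object and retry
    else:
        raise
```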

Announcing Idle Disconnect Timeout for Amazon WorkSpaces

Amazon WorkSpaces now supports Idle Disconnect Timeout for Windows WorkSpaces Personal with the Amazon DCV protocol. WorkSpaces administrators can now configure how long a user can be inactive while connected to a personal WorkSpace before they are disconnected. This setting is already available for WorkSpaces Pools, but this launch adds end user notifications for idle users, warning that their session will be disconnected soon, for both Personal and Pools.

Idle Disconnect Timeout helps Amazon WorkSpaces administrators better optimize costs and resources for their fleet. This feature helps ensure that customers who pay for their resources hourly are only paying for the WorkSpaces that are actually in use. The notifications also improve the overall user experience for both Personal and Pools end users by warning them about the pending disconnection and giving them a chance to continue or save their work beforehand. Idle Disconnect Timeout is available at no additional cost for Windows WorkSpaces running DCV, in all the AWS Regions where WorkSpaces is currently available. To get started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces. To enable this feature, you must be using Windows WorkSpaces Personal DCV host agent version 2.1.0.1554 or later. Your users must be on WorkSpaces Windows or macOS client versions 5.24 or later, WorkSpaces Linux client version 2024.7 or later, or on Web Access. Refer to the client version release notes for more details. To learn more, visit Manage your Windows WorkSpaces in the Amazon WorkSpaces Administrator Guide.

AWS delivers enhanced root cause insights to help explain cost anomalies

Today, AWS announces new enhanced root cause analysis capabilities for AWS Cost Anomaly Detection, empowering you to better pinpoint and remediate underlying factors driving unplanned cost increases. By creating anomaly monitors, you can analyze spend across services, member accounts, Cost Allocation Tags, and Cost Categories. Once a cost anomaly is detected, Cost Anomaly Detection now analyzes and ranks all possible combinations of services, accounts, regions, and usage types by cost impact, surfacing up to the top 10 root causes with their corresponding cost contributions.

With more information on the key drivers behind an anomaly, you can better identify the specific factors that contributed the most to a cost spike, such as which combination of linked account, region, and usage type led to increased spend in a particular service. With the top root causes ranked by their cost impact, you can more easily take fast, targeted actions to address these key issues before unplanned costs accrue further. The enhanced root cause analysis is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about this new feature, AWS Cost Anomaly Detection, and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection product page, documentation, and launch blog.
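The ranked root causes also surface in the Cost Explorer API. This boto3 sketch prints each recent anomaly's causes; the exact response fields may vary slightly by SDK version.

```python
import boto3

ce = boto3.client("ce", region_name="us-east-1")

# List anomalies detected in the window and print their ranked root causes.
resp = ce.get_anomalies(
    DateInterval={"StartDate": "2024-11-01", "EndDate": "2024-11-25"}
)
for anomaly in resp["Anomalies"]:
    print(anomaly["AnomalyId"], anomaly["Impact"]["TotalImpact"])
    for cause in anomaly.get("RootCauses", []):
        print("  ", cause.get("Service"), cause.get("Region"),
              cause.get("LinkedAccount"), cause.get("UsageType"))
```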

Self-Service Know Your Customer (KYC) for AWS Marketplace Sellers

AWS Marketplace now offers a self-service Know Your Customer (KYC) feature for all sellers wishing to transact via the AWS Europe, Middle East, and Africa (EMEA) Marketplace Operator. The KYC verification process is required for sellers to receive disbursements via the AWS EMEA Marketplace Operator. This new self-service feature helps sellers complete the KYC process quickly and easily, and unblocks their business growth in the EMEA region.

Completing KYC and onboarding to the EMEA Marketplace Operator allows sellers to provide a more localized experience for their customers. Customers will see consistent Value Added Tax (VAT) charges across all their AWS purchases. They can also pay using their local bank accounts through the Single Euro Payments Area (SEPA) for AWS Marketplace invoices. Additionally, customers will get invoices for all their AWS services and Marketplace purchases from a single entity, AWS EMEA. This makes billing and procurement much simpler for customers in Europe, the Middle East, and Africa. The new self-service KYC experience empowers sellers to complete verification independently, reducing the time to onboard and eliminating the need to coordinate with the AWS Marketplace support team. We invite all AWS Marketplace sellers to take advantage of this new feature to expand their reach in the EMEA region and provide an improved purchasing experience for their customers. To get started, please visit the AWS Marketplace Seller Guide.

Amazon DataZone now enhances data access governance with enforced metadata rules

Amazon DataZone now supports enforced metadata rules for data access workflows, providing organizations with enhanced capabilities to strengthen governance and compliance with their organizational needs. This new feature allows domain owners to define and enforce mandatory metadata requirements, ensuring data consumers provide essential information when requesting access to data assets in Amazon DataZone. By streamlining metadata governance, this capability helps organizations meet compliance standards, maintain audit readiness, and simplify access workflows for greater efficiency and control.

With enforced metadata rules, domain owners can establish consistent governance practices across all data subscriptions. For example, financial services organizations can mandate specific compliance-related metadata when data consumers request access to sensitive financial data. Similarly, healthcare providers can enforce metadata requirements to align with regulatory standards for patient data access. This feature simplifies the approval process by guiding data consumers through completing mandatory fields and enabling data owners to make informed decisions, ensuring data access requests meet organizational policies. The feature is supported in all the AWS commercial Regions where Amazon DataZone is currently available. Check out this blog and video to learn more about how to set up metadata rules for subscription workflows. Get started with the technical documentation.

AWS Marketplace introduces AI-powered product summaries and comparisons

AWS Marketplace now provides AI-powered product summaries and comparisons for popular software as a service (SaaS) products, helping you make faster and more informed software purchasing decisions. Use this feature to compare similar SaaS products across key evaluation criteria such as customer reviews, product popularity, features, and security credentials. Additionally, you can gain AI-summarized insights into key decision factors like ease of use, customer support, and cost effectiveness.

Sifting through thousands of options on the web to find software products that best fit your business needs can be challenging and time-consuming. The new product comparisons feature in AWS Marketplace simplifies this process for you. It leverages machine learning to recommend similar SaaS products for consideration. It then uses generative AI to summarize product information and customer reviews, highlight unique aspects of products, and help you understand key differences to identify the best product for your use cases. You can also customize the comparison sets and download comparison tables to share with colleagues. The product comparisons feature is available for popular SaaS products in all commercial AWS Regions where AWS Marketplace is available. Check out AI-generated product summaries in AWS Marketplace. Find the new experience on popular SaaS product pages such as Databricks Data Intelligence Platform and Trend Cloud One. To learn more about how the experience works, visit the AWS Marketplace Buyer Guide.

AWS Control Tower adds prescriptive backup plans to landing zone capabilities

Today, AWS Control Tower added AWS Backup to the list of AWS services you can optionally configure with prescriptive guidance. This configuration option allows you to select from a range of recommended backup plans, seamlessly integrating data backup and recovery workflows into your Control Tower landing zone and organizational units. A landing zone is a well-architected, multi-account AWS environment based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, logging, and account structure, and with this launch adds data retention.

When you choose to enable AWS Backup on your landing zone and then select the applicable organizational units, Control Tower creates a backup plan with predefined rules, such as retention days, frequency, and the time window during which backups occur, that define how to back up AWS resources across all governed member accounts. Applying the backup plan at the Control Tower landing zone ensures it is consistent for all member accounts, in line with best practice recommendations from AWS Backup. For a full list of Regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.

Amazon Q Developer now transforms embedded SQL from Oracle to PostgreSQL

When you use AWS Database Migration Service (AWS DMS) and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q Developer in the IDE to automate the conversion.

Amazon Q Developer uses metadata from DMS Schema Conversion to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q Developer will detect Oracle SQL statements in your application and convert them to PostgreSQL. You can review and accept the proposed changes, view a summary of the transformation, and follow the recommended next steps in the summary to verify and test the transformed code. This capability is available within the Visual Studio Code and IntelliJ IDEs. Learn more and get started here.

AWS Billing and Cost Management Data Exports for FOCUS 1.0 is now generally available

Today, AWS announces the general availability (GA) of Data Exports for FOCUS 1.0, which has been in public preview since June 2024. FOCUS 1.0 is an open-source cloud cost and usage specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 enables customers to export their AWS cost and usage data with the FOCUS 1.0 schema to Amazon S3. The GA release of FOCUS 1.0 is a new table in Data Exports in which key specification conformance gaps from the preview table have been closed.

With Data Exports for FOCUS 1.0 (GA), customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures data can be reliably referenced across sources. Data Exports for FOCUS 1.0 (GA) is available in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except the AWS GovCloud (US) Regions and the AWS China (Beijing and Ningxia) Regions. Learn more about Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the new GA table named “FOCUS 1.0 with AWS columns”. After creating a FOCUS 1.0 GA export, you will no longer need your preview export. You can view the specification conformance of the GA release here.
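As a sketch, the export can also be created programmatically with the Data Exports API. The table identifier FOCUS_1_0_AWS and the output settings below are assumptions based on the announcement and the existing Data Exports structures; check the User Guide for the authoritative values.

```python
import boto3

exports = boto3.client("bcm-data-exports", region_name="us-east-1")

# Sketch: export the GA FOCUS 1.0 table to S3 on the standard refresh cadence.
exports.create_export(Export={
    "Name": "focus-1-0-ga",
    "DataQuery": {"QueryStatement": "SELECT * FROM FOCUS_1_0_AWS"},  # assumed table id
    "DestinationConfigurations": {"S3Destination": {
        "S3Bucket": "amzn-s3-demo-bucket",   # hypothetical bucket
        "S3Prefix": "focus",
        "S3Region": "us-east-1",
        "S3OutputConfigurations": {
            "OutputType": "CUSTOM",
            "Format": "PARQUET",
            "Compression": "PARQUET",
            "Overwrite": "OVERWRITE_REPORT",
        },
    }},
    "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
})
```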

Amazon Managed Service for Apache Flink now supports Amazon Managed Service for Prometheus as a destination

Today, AWS announced support for a new Apache Flink connector for Amazon Managed Service for Prometheus. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon Managed Service for Prometheus as a new destination for Apache Flink. You can now manage your Prometheus metrics data cardinality by pre-processing raw data with Apache Flink to build real-time observability with Amazon Managed Service for Prometheus and Grafana.

Amazon Managed Service for Prometheus is a secure, serverless, scalable, Prometheus-compatible monitoring service. You can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your workloads without having to manage the underlying infrastructure. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to send processed data to an Amazon Managed Service for Prometheus destination starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink you can transform and analyze data in real time. There are no servers and clusters to manage, and there is no compute and storage infrastructure to set up. You can learn more about Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus in our documentation. To learn more about open source Apache Flink connectors, visit the official website. For Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus Region availability, refer to the AWS Region Table.

Amazon SageMaker introduces Scale Down to Zero for AI inference to help customers save costs

We are excited to announce Scale Down to Zero, a new capability in Amazon SageMaker Inference that allows endpoints to scale to zero instances during periods of inactivity. This feature can significantly reduce costs for running inference using AI models, making it particularly beneficial for applications with variable traffic patterns such as chatbots, content moderation systems, and other generative AI use cases.

With Scale Down to Zero, customers can configure their SageMaker inference endpoints to automatically scale to zero instances when not in use, then quickly scale back up when traffic resumes. This capability is effective for scenarios with predictable traffic patterns, intermittent inference traffic, and development/testing environments. Implementing Scale Down to Zero is simple with SageMaker Inference Components. Customers can configure auto-scaling policies through the AWS SDK for Python (Boto3), SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). The process involves setting up an endpoint with managed instance scaling enabled, configuring scaling policies, and creating CloudWatch alarms to trigger scaling actions. Scale Down to Zero is now generally available in all AWS Regions where Amazon SageMaker is supported. To learn more about implementing Scale Down to Zero and optimizing costs for generative AI deployments, please visit our documentation page.
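A minimal configuration sketch with Application Auto Scaling, assuming an inference component named my-inference-component; the predefined metric type is an assumption based on the inference-component scaling model, so verify it against the SageMaker documentation. (Scale-up from zero is driven by a CloudWatch alarm on the endpoint's no-capacity invocation metrics, which is omitted here.)

```python
import boto3

aas = boto3.client("application-autoscaling")
ic_name = "my-inference-component"  # hypothetical inference component

# Allow the component to scale its copy count down to zero when idle.
aas.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=f"inference-component/{ic_name}",
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    MinCapacity=0,   # the new capability: zero copies during inactivity
    MaxCapacity=4,
)

# Track invocations per copy to scale back out when traffic resumes.
aas.put_scaling_policy(
    PolicyName="scale-on-traffic",
    ServiceNamespace="sagemaker",
    ResourceId=f"inference-component/{ic_name}",
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerInferenceComponentInvocationsPerCopy"
        },
    },
)
```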

Amazon Q Developer Pro tier introduces a new, improved dashboard for user activity

Amazon Q Developer Pro tier now provides a detailed usage activity dashboard that gives administrators greater visibility into how their subscribed users are leveraging Amazon Q Developer features and improving their productivity. The dashboard offers insights into user activity metrics, including the number of AI-generated code lines and the acceptance rate of individual features, such as inline code and chat suggestions in the developer’s integrated development environment (IDE). This information enables administrators to monitor usage and evaluate productivity gains achieved through Amazon Q Developer.

New customers will have this usage dashboard enabled by default. Existing Amazon Q Developer administrators can activate the dashboard through the AWS Management Console to start tracking detailed usage metrics. Existing customers can also continue to view a copy of the previous set of metrics and usage data, in addition to the new detailed usage metrics dashboard. To learn more about this feature, visit the Amazon Q Developer User Guide. These improvements come in conjunction with the recently launched per-user activity report and last activity date features for Amazon Q Developer administrators, further enhancing visibility and control over user activity. To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Amazon EC2 Capacity Blocks now supports instant start times and extensions

Today, Amazon Web Services announces three new features for Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML that enable you to get near-instantaneous access to GPU and ML chip instances through Capacity Blocks, extend the durations of your Capacity Blocks, and reserve Capacity Blocks for longer periods of up to six months. With these new features, you have more options to provision GPU and ML chip capacity to meet your machine learning (ML) workload needs.

With Capacity Blocks, you can reserve GPU and ML chip capacity in cluster sizes of one to 64 instances (512 GPUs, or 1,024 Trainium chips), giving you the flexibility to run a wide variety of ML workloads. Starting today, you can provision Capacity Blocks that begin in just minutes, enabling you to quickly access GPU and ML chip capacity. You can also extend your Capacity Block when your ML job takes longer than you anticipated, ensuring uninterrupted access to capacity. Finally, for projects that require GPU or ML chip capacity for longer durations, you can now provision Capacity Blocks for up to six months, allowing you to get capacity for just the amount of time you need. EC2 Capacity Blocks are available for P5e, P5, P4d, and Trn1 instances in US East (N. Virginia and Ohio), US West (Oregon), and Asia Pacific (Tokyo and Melbourne). See the User Guide for a detailed breakdown of instance availability by Region. To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.
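For example, finding and purchasing a block with boto3 looks roughly like the sketch below; both operations already exist in the EC2 API, and the instance type, count, and duration are illustrative.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find Capacity Block offerings matching the cluster size and duration we need.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    CapacityDurationHours=24 * 7,  # one week
)

# Purchase the first matching offering.
offering = offerings["CapacityBlockOfferings"][0]
ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offering["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
```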

AWS Backup now supports Amazon Timestream in Asia Pacific (Mumbai)

Today, we are announcing the availability of AWS Backup support for Amazon Timestream for LiveAnalytics in the Asia Pacific (Mumbai) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Timestream for LiveAnalytics along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Timestream for LiveAnalytics is available in the following Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai, Sydney, Tokyo), and Europe (Frankfurt, Ireland). For more information on regional availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup Feature Availability page. To learn more about AWS Backup support for Amazon Timestream for LiveAnalytics, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.

AWS DMS now supports Data Masking

AWS Database Migration Service (AWS DMS) now supports Data Masking, enabling customers to transform sensitive data at the column level during migration, helping to comply with data protection regulations like GDPR. Using AWS DMS, you can now create copies of data that redact, at the column level, the information you need to protect.

Data Masking will automatically mask the portions of data you specify, and offers three transformation techniques: digit randomization, digit masking, and hashing. It’s available for all endpoints supported by DMS Classic and DMS Serverless in version 3.5.4. To learn more about Data Masking with AWS DMS, please refer to the AWS DMS technical documentation.
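A sketch of what a column-level masking rule might look like in a DMS table mapping; the rule-action string is an assumption based on the three techniques named above, so confirm the exact values in the DMS documentation.

```python
import json

# Sketch: mask the digits of a column during migration. Schema, table, and
# column names are hypothetical; the rule-action value is assumed.
table_mappings = {
    "rules": [{
        "rule-type": "transformation",
        "rule-id": "1",
        "rule-name": "mask-card-number",
        "rule-action": "data-masking-digits-mask",  # assumed name; see DMS docs
        "rule-target": "column",
        "object-locator": {
            "schema-name": "sales",
            "table-name": "payments",
            "column-name": "card_number",
        },
    }]
}
print(json.dumps(table_mappings, indent=2))
```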

Amazon Connect now allows agents to self-assign tasks

Amazon Connect now allows agents to create and assign a task to themselves by checking a box in the agent workspace or Contact Control Panel (CCP). For example, an agent can schedule a follow-up with a customer by creating a task for a preferred time and checking the self-assignment option. Amazon Connect Tasks empowers you to prioritize, assign, and track all contact center agent tasks to completion, improving agent productivity and ensuring customer issues are quickly resolved.

This feature is supported in all AWS Regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Enhanced Pricing Calculator now supports discounts and purchase commitments (in preview)

Today, AWS announces the public preview of the enhanced AWS Pricing Calculator, which provides accurate cost estimates for new workloads or modifications to your existing AWS usage by incorporating eligible discounts. It also helps you estimate the cost impact of your commitment purchases on your organization’s consolidated bill. With today’s launch, AWS Pricing Calculator now allows you to apply eligible discounts to your cost estimates, enabling you to make informed financial planning decisions.

The enhanced Pricing Calculator, available within the AWS Billing and Cost Management console, provides two types of cost estimates: cost estimation for a workload, and estimation of a full AWS bill. Using the enhanced Pricing Calculator, you can import your historical usage or create net-new usage when creating a cost estimate. You can also get started by importing existing Pricing Calculator estimates, and you can share an estimate with other AWS console users. Using the enhanced Pricing Calculator, you can confidently assess the cost impact and understand your return on investment for migrating workloads, planning new workloads, or growing existing workloads, and you can plan for commitment purchases on the AWS cloud. You can also create or access cost estimates using a new public cost estimation API.

The enhanced Pricing Calculator is available in all AWS commercial Regions, excluding China. To get started with the new Pricing Calculator, visit the AWS Billing and Cost Management console. To learn more, visit the AWS Pricing Calculator user guide and blog.
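A sketch of the new API via boto3; the bcm-pricing-calculator service identifier and the operation shown are assumptions based on the announcement, so check the user guide for the released API surface.

```python
import boto3

# Sketch: start a workload estimate programmatically.
calc = boto3.client("bcm-pricing-calculator", region_name="us-east-1")

estimate = calc.create_workload_estimate(name="migration-scenario-q1")
print(estimate["id"], estimate["status"])
```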

Amazon S3 Express One Zone now supports conditional deletes

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, can now evaluate whether an object is unchanged before deleting it. This conditional delete capability helps you improve data durability and reduce errors from accidental deletions in high-concurrency, multiple-writer scenarios.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets, helping to prevent unintentional overwrites. Now, in directory buckets, clients can perform conditional delete checks on an object’s last modified time, size, and ETag using the x-amz-if-match-last-modified-time, x-amz-if-match-size, and HTTP if-match headers, respectively, in the DeleteObject and DeleteObjects APIs. S3 Express One Zone then evaluates whether each of these object attributes matches the value provided in these headers and prevents your clients from deleting the object until the condition is satisfied. You can use these headers together or individually in a delete request to reliably offload object-state evaluation to S3 Express One Zone and efficiently secure your distributed and highly parallelized workloads against unintended deletions. S3 Express One Zone support for conditional deletes is available at no additional charge in all AWS Regions where the storage class is available. You can use the S3 API, SDKs, and CLI to perform conditional deletes. To learn more, visit the S3 documentation.
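A minimal boto3 sketch, assuming an SDK recent enough to expose the new conditional parameters on delete_object; the directory bucket and key names are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket--usw2-az1--x-s3"  # hypothetical directory bucket
key = "jobs/checkpoint.bin"

# Capture the object's current state, then delete only if it is unchanged.
head = s3.head_object(Bucket=bucket, Key=key)

try:
    s3.delete_object(
        Bucket=bucket,
        Key=key,
        IfMatch=head["ETag"],                          # HTTP if-match
        IfMatchLastModifiedTime=head["LastModified"],  # x-amz-if-match-last-modified-time
        IfMatchSize=head["ContentLength"],             # x-amz-if-match-size
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "PreconditionFailed":
        pass  # the object changed; re-evaluate before deleting
    else:
        raise
```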

Amazon Managed Service for Apache Flink now delivers to Amazon SQS queues

Today, AWS announced support for a new Apache Flink connector for Amazon Simple Queue Service (Amazon SQS). The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon SQS as a new destination for Apache Flink, a popular framework and engine for processing and analyzing streaming data. You can use the new connector to send processed data from Amazon Managed Service for Apache Flink to Amazon SQS queues.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon S3, custom integrations, and more using built-in connectors. Amazon SQS offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components, along with common constructs such as dead-letter queues and cost allocation tags. You can learn more about Amazon Managed Service for Apache Flink and Amazon SQS in our documentation. To learn more about open source Apache Flink connectors, visit the official website. For Amazon Managed Service for Apache Flink and Amazon SQS Region availability, refer to the AWS Region Table.

AWS Artifact enhances agreements with improved access control and tracking

We are excited to announce enhancements to the agreement functionality in AWS Artifact that improve how you manage and track agreement execution.

You can now provide fine-grained access to agreements in AWS Artifact at the AWS Identity and Access Management (IAM) action and resource level. To make it easy for you to configure IAM permissions, we have introduced the “AWSArtifactAgreementsReadOnlyAccess” and “AWSArtifactAgreementsFullAccess” managed policies for AWS Artifact agreements, which provide read-only and full permissions, respectively. We have also implemented AWS CloudTrail logging for agreement activities on AWS Artifact, which enables you to easily track and audit user activity and API calls related to agreements. To take advantage of the new features through the Artifact console, please update your IAM policies and opt in to the new fine-grained permissions by selecting that option on the Artifact Agreements console. We also introduced a new API called listCustomerAgreements that allows you to list the active customer agreements for each AWS account. This API enables automation and efficient tracking of active agreements, especially for customers managing a large number of accounts or complex compliance requirements. These features are available in all AWS commercial Regions. To learn more about AWS Artifact and how to manage agreements, refer to the documentation and the AWS Artifact API reference.
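A sketch of the new API with boto3, assuming a recent SDK that includes the listCustomerAgreements operation; the response field names are assumptions based on the API reference.

```python
import boto3

artifact = boto3.client("artifact", region_name="us-east-1")

# Sketch: list active customer agreements for the calling account.
resp = artifact.list_customer_agreements()
for agreement in resp.get("customerAgreements", []):
    print(agreement.get("name"), agreement.get("state"))
```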

Amazon SageMaker launches Multi-Adapter Model Inference

Today, Amazon SageMaker introduces new multi-adapter inference capabilities that unlock exciting possibilities for customers using pre-trained language models. This feature allows you to deploy hundreds of fine-tuned LoRA (Low-Rank Adaptation) model adapters behind a single endpoint, dynamically loading the appropriate adapter in milliseconds based on the request. This enables you to efficiently host many specialized LoRA adapters built on a common base model, delivering high throughput and cost savings compared to deploying separate models.

With multi-adapter inference, you can quickly customize pre-trained models to meet diverse business needs. For example, marketing and SaaS companies can personalize AI/ML applications using each customer’s unique images, communication style, and documents to generate tailored content in seconds. Similarly, enterprises in industries like healthcare and financial services can reuse a common LoRA-powered base model to tackle a variety of specialized tasks, from medical diagnosis to fraud detection, by simply swapping in the appropriate fine-tuned adapter. This flexibility and efficiency unlocks new opportunities to deploy powerful, adaptable AI across your organization. The multi-adapter inference feature is generally available in Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney, Jakarta), Canada (Central), Europe (Frankfurt, Stockholm, Ireland, London), Middle East (UAE), South America (Sao Paulo), US East (N. Virginia, Ohio), and US West (Oregon). To get started, refer to the Amazon SageMaker developer guide for information on using LoRA and managing model adapters.
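At invocation time, a client selects an adapter simply by naming its inference component; the endpoint and component names in this boto3 sketch are hypothetical.

```python
import boto3

smr = boto3.client("sagemaker-runtime")

# Route one request to a specific fine-tuned LoRA adapter hosted behind a
# shared endpoint; SageMaker loads the adapter on demand.
response = smr.invoke_endpoint(
    EndpointName="shared-llm-endpoint",            # hypothetical endpoint
    InferenceComponentName="customer-a-adapter",   # hypothetical adapter component
    ContentType="application/json",
    Body=b'{"inputs": "Summarize this claim for the customer."}',
)
print(response["Body"].read().decode())
```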

AWS CodePipeline now supports publishing ECR image and AWS InspectorScan as new actions

AWS CodePipeline introduces the ECRBuildAndPublish action and the AWS InspectorScan action in its action catalog. The ECRBuildAndPublish action enables you to easily build a Docker image and publish it to Amazon ECR as part of your pipeline execution. The InspectorScan action enables you to scan your source code repository or Docker image as part of your pipeline execution.

Previously, if you wanted to build and publish a Docker image or run a vulnerability scan, you had to create a CodeBuild project, configure the project with the appropriate commands, and add a CodeBuild action to your pipeline to run the project. Now, you can simply add these actions to your pipeline and let the pipeline handle the rest for you. To learn more about using the ECRBuildAndPublish action in your pipeline, visit our documentation. To learn more about using the InspectorScan action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. These new actions are available in all Regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
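As a sketch, the new build action can be declared directly in a pipeline stage, shown here as a Python dict in the CodePipeline pipeline structure; the configuration keys are assumptions based on the action documentation, and the repository name is hypothetical.

```python
import json

# Sketch: an ECRBuildAndPublish action declaration for a pipeline stage.
ecr_build_action = {
    "name": "BuildAndPublishImage",
    "actionTypeId": {
        "category": "Build",
        "owner": "AWS",
        "provider": "ECRBuildAndPublish",
        "version": "1",
    },
    "configuration": {
        "ECRRepositoryName": "my-app",      # hypothetical repository
        "DockerFilePath": "./Dockerfile",   # assumed configuration key
    },
    "inputArtifacts": [{"name": "SourceOutput"}],
    "runOrder": 1,
}
print(json.dumps(ecr_build_action, indent=2))
```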

Amazon Connect Contact Lens launches calibrations for agent performance evaluations

You can now perform calibrations to drive consistency and accuracy in how managers evaluate agent performance, so that agents receive consistent feedback. During a calibration, multiple managers evaluate the same contact using the same evaluation form. You can then review differences in the evaluations completed by different managers to align managers on evaluation best practices and identify opportunities to improve the evaluation form (e.g., rephrasing an evaluation question to be more specific so that managers answer it consistently). You can also compare managers’ answers with an approved evaluation to measure and improve manager accuracy in evaluating agent performance.

This feature is available in all Regions where Contact Lens performance evaluations are already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.

Request future dated Amazon EC2 Capacity Reservations

Today, we are announcing that you can request Amazon EC2 Capacity Reservations to start on a future date. Capacity Reservations provide assurance for your critical workloads by allowing you to reserve compute capacity in a specific Availability Zone. With future-dated Capacity Reservations, you can secure capacity for your future needs, providing you with peace of mind for your critical future scaling events.

You can create future-dated Capacity Reservations by specifying the capacity you need, the start date, and the minimum duration you commit to using the reservation. Once EC2 approves the request, your reservation is scheduled to go active on the chosen start date, and upon activation you can immediately launch instances. This new capability is available to all Capacity Reservations customers in all AWS commercial Regions, the AWS China Regions, and the AWS GovCloud (US) Regions at no additional cost. To learn more about these features, please refer to the Capacity Reservations user guide.
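A sketch of a future-dated request with boto3; the StartDate and CommitmentDuration parameter names are assumptions based on the announcement, so verify them against the EC2 API reference.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch: reserve 100 instances to go active on a future date, with a
# two-week minimum commitment.
ec2.create_capacity_reservation(
    InstanceType="m7i.4xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=100,
    StartDate="2025-02-01T00:00:00Z",   # go-active date (assumed parameter)
    CommitmentDuration=14 * 24 * 3600,  # committed seconds (assumed parameter)
)
```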

Announcing InlineAgents for Agents for Amazon Bedrock

Agents for Amazon Bedrock now offers InlineAgents, a new feature that allows developers to define and configure Bedrock Agents dynamically at runtime. This enhancement provides greater flexibility and control over agent capabilities, enabling users to specify foundation models, instructions, action groups, guardrails, and knowledge bases on the fly, without relying on pre-configured control plane settings.

With InlineAgents, developers can easily customize their agents for specific tasks or user requirements without creating new agent versions or preparing the agent. This feature enables rapid experimentation with different AI configurations, trying out various agent features, and dynamically updating the tools available to an agent without creating separate agents. InlineAgents is available through the new InvokeInlineAgent API in the Amazon Bedrock Agent Runtime service. This feature maintains full compatibility with existing Bedrock Agents while offering improved flexibility and ease of use. InlineAgents is now available in all AWS Regions where Agents for Amazon Bedrock is supported. To learn more about InlineAgents and how to get started, see the Amazon Bedrock Developer Guide and the AWS SDK documentation for the InvokeInlineAgent API.
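A minimal boto3 sketch, assuming a recent SDK that exposes invoke_inline_agent; the model ID and prompt are illustrative.

```python
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Sketch: define and invoke an agent entirely at runtime, with no
# pre-configured agent resource.
response = runtime.invoke_inline_agent(
    sessionId="demo-session-1",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    instruction="You are a support assistant. Answer briefly and cite sources.",
    inputText="What is our refund policy?",
)

# The response is an event stream of completion chunks.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode(), end="")
```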

AWS AppConfig supports automatic rollback safety from third-party alerts

AWS AppConfig has added support for third-party monitors to trigger automatic rollbacks when there are problems with updates to feature flags, experimental flags, or configuration data. Customers can now connect AWS AppConfig to third-party application performance monitoring (APM) solutions; previously, monitoring required Amazon CloudWatch. This monitoring gives customers more confidence and additional safety controls when making any change in production.

Unexpected downtime or degraded performance can result from faulty changes to feature flags or configuration data. AWS AppConfig provides safety guardrails to reduce this risk. One key guardrail is the ability to have AWS AppConfig immediately roll back a change when a monitor alerts during the rollout of a feature flag or configuration change. This automation can typically remediate problems faster than a human operator can. Customers can use AWS AppConfig Extensions to connect to any API-enabled APM, including proprietary solutions. Third-party alarm rollback for AWS AppConfig is available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide, or learn about AWS AppConfig automatic rollback.
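As a sketch, an extension that consults a third-party APM during the bake phase might be registered as below; the Lambda function and role ARNs are hypothetical, and the function itself would query your APM and raise an error to make AppConfig roll the deployment back.

```python
import boto3

appconfig = boto3.client("appconfig")

# Sketch: register an extension whose action runs while a deployment bakes.
appconfig.create_extension(
    Name="apm-rollback-check",
    Actions={
        "ON_DEPLOYMENT_BAKING": [{
            "Name": "check-apm-alarms",
            "Uri": "arn:aws:lambda:us-east-1:123456789012:function:check-apm",  # hypothetical
            "RoleArn": "arn:aws:iam::123456789012:role/appconfig-extension",    # hypothetical
        }]
    },
)
```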

Amazon CloudWatch adds context to observability data in service consoles, accelerating analysis

Amazon CloudWatch now adds context to observability data, making it much easier for IT operators, application developers, and Site Reliability Engineers (SREs) to navigate related telemetry, visualize relationships between resources, and accelerate analysis. This new feature transforms disparate metrics and logs into real-time insights, helping you identify the root cause of issues faster and improve operational efficiency.

With this feature, Amazon CloudWatch now automatically visualizes the relationships within observability data and the underlying AWS resources, such as Amazon EC2 instances and AWS Lambda functions. The feature is integrated across the AWS Management Console and accessible from multiple entry points, including CloudWatch widgets, CloudWatch alarms, CloudWatch Application Signals, and CloudWatch Container Insights. Selecting it opens a side panel where you can explore and dive deeper into related metrics and logs without leaving your current view. By selecting other metrics or resources of interest within the panel, you can streamline your troubleshooting process. This new capability is enabled by default in all commercial AWS Regions. To view and explore related telemetry and resources, we recommend updating to the latest version of the Amazon CloudWatch agent. To learn more, visit the Amazon CloudWatch product page or view the documentation.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Cloud Financial Management

AWS Compute Blog

AWS Contact Center

AWS Database Blog

Front-End Web & Mobile

AWS for Industries

AWS Machine Learning Blog

Networking & Content Delivery

AWS Storage Blog

Open Source Project

AWS CLI

AWS CDK

Amplify for JavaScript

Amplify for iOS