5/17/2024, 12:00:00 AM ~ 5/20/2024, 12:00:00 AM (UTC)
Recent Announcements
Knowledge Bases for Amazon Bedrock now lets you configure Guardrails
Knowledge Bases for Amazon Bedrock (KB) securely connects foundation models (FMs) to internal company data sources for Retrieval Augmented Generation (RAG), delivering more relevant and accurate responses. We are excited to announce that Guardrails for Amazon Bedrock is now integrated with Knowledge Bases. Guardrails let you implement safeguards customized to your RAG application requirements and responsible AI policies, leading to a better end-user experience. Guardrails provides a comprehensive set of policies to protect your users from undesirable responses and interactions with a generative AI application. First, you can customize a set of denied topics to avoid within the context of your application. Second, you can filter content across prebuilt harmful categories such as hate, insults, sexual content, violence, misconduct, and prompt attacks. Third, you can define a set of offensive and inappropriate words to be blocked in your application. Finally, you can filter user inputs containing sensitive information (e.g., personally identifiable information) or redact confidential information in model responses, depending on your use case. Guardrails can be applied both to the input sent to the model and to the content generated by the foundation model. This capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API.
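As a sketch of how a guardrail might be attached to a RetrieveAndGenerate call, the helper below assembles the request payload. The field names (`guardrailConfiguration`, `guardrailId`, `guardrailVersion`) follow the Bedrock Agent Runtime API shape, but treat them as assumptions and confirm the exact structure in the Knowledge Bases documentation; the boto3 call itself is shown commented out.

```python
def build_rag_request(kb_id, model_arn, guardrail_id, guardrail_version, query):
    """Build a RetrieveAndGenerate request that attaches a guardrail to the
    generation step of a Knowledge Base query (field names are illustrative)."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
                "generationConfiguration": {
                    # The guardrail is applied to both the prompt sent to the
                    # model and the generated answer.
                    "guardrailConfiguration": {
                        "guardrailId": guardrail_id,
                        "guardrailVersion": guardrail_version,
                    }
                },
            },
        },
    }

# Example usage (requires boto3 and AWS credentials; not run here):
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(
#     **build_rag_request("kb-123", "arn:aws:bedrock:...:foundation-model/...",
#                         "gr-abc", "1", "What is our refund policy?"))
# print(response["output"]["text"])
```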
Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20240408
Amazon Relational Database Service (RDS) for MySQL announces Amazon RDS Extended Support minor version 5.7.44-RDS.20240408. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of MySQL. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide. Amazon RDS Extended Support gives you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL and PostgreSQL databases on Aurora and RDS after the community ends support for a major version. You can run your MySQL databases on Amazon RDS with Extended Support for up to three years beyond a major version’s end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide and the Pricing FAQs. Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. See Amazon RDS for MySQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
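A minor version upgrade like this one can also be applied programmatically. The sketch below builds the parameters for the RDS ModifyDBInstance API; the instance identifier is a placeholder, and `ApplyImmediately=False` defers the change to the next maintenance window (the boto3 call is shown commented out).

```python
def build_minor_version_upgrade(db_instance_id,
                                engine_version="5.7.44-RDS.20240408",
                                apply_immediately=False):
    """Build a ModifyDBInstance request for an RDS for MySQL minor version
    upgrade. With apply_immediately=False the upgrade runs during the next
    scheduled maintenance window."""
    return {
        "DBInstanceIdentifier": db_instance_id,
        "EngineVersion": engine_version,
        "ApplyImmediately": apply_immediately,
    }

# Example usage (requires boto3 and AWS credentials; not run here):
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_instance(**build_minor_version_upgrade("my-mysql-57-instance"))
```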
Knowledge Bases for Amazon Bedrock now lets you configure inference parameters
We are excited to announce that Knowledge Bases for Amazon Bedrock (KB) now lets you configure inference parameters, giving you greater control over personalizing the responses generated by a foundation model (FM). With this launch you can optionally set inference parameters such as the randomness and length of the generated response. You can control how random or diverse the generated text is by adjusting settings such as temperature and top-p. The temperature setting makes the model more or less likely to choose unusual or unexpected words; a lower temperature produces more expected and common word choices. The top-p setting limits how many word options the model considers; reducing it restricts consideration to a smaller set of choices, making the output more conventional. In addition to randomness and diversity, you can restrict the length of the foundation model's output through maxTokens and stopSequences. The maxTokens setting specifies the maximum number of tokens to return in the generated response. Finally, the stopSequences setting lets you configure strings that signal the model to stop generating further tokens. The inference parameters capability within Knowledge Bases is now available in the Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), US East (N. Virginia), and US West (Oregon) Regions. To learn more, refer to the Knowledge Bases for Amazon Bedrock documentation. To get started, visit the Amazon Bedrock console or use the RetrieveAndGenerate API.
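The four parameters above can be sketched as a small helper that builds the text inference configuration for a RetrieveAndGenerate request. The nesting (`inferenceConfig` → `textInferenceConfig`) follows the Bedrock API shape as an assumption; verify it against the current Knowledge Bases documentation before relying on it.

```python
def build_generation_config(temperature=0.2, top_p=0.9,
                            max_tokens=512, stop_sequences=None):
    """Build a generationConfiguration carrying the inference parameters:
    temperature and topP control randomness/diversity, maxTokens caps the
    response length, and stopSequences halts generation on given strings."""
    return {
        "inferenceConfig": {
            "textInferenceConfig": {
                "temperature": temperature,     # lower -> more predictable words
                "topP": top_p,                  # lower -> smaller candidate set
                "maxTokens": max_tokens,        # maximum tokens in the response
                "stopSequences": stop_sequences or [],
            }
        }
    }

# This dict would be placed under retrieveAndGenerateConfiguration
# .knowledgeBaseConfiguration.generationConfiguration in the request.
```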
AWS HealthImaging now supports retrieval of DICOM Part 10 instances
AWS HealthImaging now supports retrieval of DICOM Part 10 data, enabling customers to download instance-level binaries. The retrieve DICOM instance API conforms to the DICOMweb WADO-RS standard for web-based medical imaging. With this feature launch, customers taking advantage of HealthImaging’s cloud-native interfaces can better interoperate with systems that use DICOM Part 10 binaries. You can retrieve a DICOM instance from a HealthImaging data store by specifying the Study, Series, and Instance UIDs associated with the resource. You can also provide an optional image set ID as a query parameter to specify the image set from which the instance should be retrieved. Customers can specify the transfer syntax, such as uncompressed (Explicit VR Little Endian, ELE) or compressed (High-Throughput JPEG 2000). To learn more about how to retrieve DICOM Part 10 binaries, see the AWS HealthImaging Developer Guide. AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership. AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). To learn more, visit AWS HealthImaging.
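The WADO-RS addressing scheme described above nests instances under their series and study. As a sketch, the helper below builds such a resource path with the optional image set ID query parameter; the `imageSetId` parameter name is an assumption here, and the authoritative endpoint and path shape are in the AWS HealthImaging Developer Guide.

```python
def wado_rs_instance_path(study_uid, series_uid, instance_uid, image_set_id=None):
    """Build a DICOMweb WADO-RS style path for retrieving a single instance.
    The study/series/instance hierarchy follows the WADO-RS convention;
    the imageSetId query parameter name is illustrative."""
    path = f"/studies/{study_uid}/series/{series_uid}/instances/{instance_uid}"
    if image_set_id is not None:
        path += f"?imageSetId={image_set_id}"
    return path

# Requests against a HealthImaging data store endpoint would additionally
# need SigV4 signing and an Accept header selecting the transfer syntax.
```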
Amazon MSK now supports the removal of brokers from MSK provisioned clusters
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports removing brokers from MSK provisioned clusters. Administrators can optimize the costs of their Amazon MSK clusters by reducing broker count to meet the changing needs of their streaming workloads, while maintaining cluster performance, availability, and data durability. Customers use Amazon MSK as the core foundation for a variety of real-time streaming applications and high-performance event-driven architectures. As their business needs and traffic patterns change, they often adjust cluster capacity to optimize costs. Amazon MSK Provisioned already lets customers change their provisioned clusters by adding brokers or changing the instance size and type; with broker removal, it now offers an additional option to right-size cluster capacity. Customers can remove multiple brokers from their MSK provisioned clusters to meet the varying needs of their streaming workloads without any impact on client connectivity for reads and writes. By using the broker removal capability, administrators can adjust a cluster’s capacity, eliminating the need to migrate to another cluster to reduce broker count. Brokers can be removed from Amazon MSK provisioned clusters configured with M5 or M7g instance types. The feature is available in all AWS Regions where MSK Provisioned is supported. To learn more, visit our launch blog and the Amazon MSK Developer Guide.
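A plausible way to drive this programmatically is the existing MSK UpdateBrokerCount API, targeting a lower broker count than the cluster currently has; that this same API handles removal is an assumption to confirm in the MSK Developer Guide. The helper below builds the request, with the boto3 call commented out.

```python
def build_broker_count_update(cluster_arn, current_version, target_broker_count):
    """Build an UpdateBrokerCount request. CurrentVersion is the cluster's
    current configuration revision (required for optimistic locking); a
    target count below the current broker count would request removal."""
    return {
        "ClusterArn": cluster_arn,
        "CurrentVersion": current_version,
        "TargetNumberOfBrokerNodes": target_broker_count,
    }

# Example usage (requires boto3 and AWS credentials; not run here):
# import boto3
# kafka = boto3.client("kafka")
# desc = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]
# kafka.update_broker_count(
#     **build_broker_count_update(cluster_arn, desc["CurrentVersion"], 6))
```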
Bottlerocket now supports NVIDIA Fabric Manager for Multi-GPU Workloads
Today, AWS announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now supports NVIDIA Fabric Manager, enabling users to harness the power of multi-GPU configurations for their AI and machine learning workloads. With this integration, Bottlerocket users can seamlessly leverage their connected GPUs as a high-performance compute fabric, enabling efficient, low-latency communication between all the GPUs in each of their P4/P5 instances. The growing sophistication of deep learning models has led to an exponential increase in the computational resources required to train them within a reasonable timeframe. To meet these computational demands, customers running AI and machine learning workloads have turned to multi-GPU implementations, leveraging NVIDIA’s NVSwitch and NVLink technologies to create a unified memory fabric across connected GPUs. Fabric Manager support in the Bottlerocket NVIDIA variants allows users to configure this fabric, enabling all GPUs to be used as a single high-performance pool rather than as individual units. This makes it possible for Bottlerocket users to run multi-GPU setups on P4/P5 instances, significantly accelerating the training of complex neural networks. To learn more about Fabric Manager support in the Bottlerocket NVIDIA variants, please visit the official Bottlerocket GitHub repo.
AWS Blogs
AWS Japan Blog (Japanese)
- The “Considerations Related to the Economic Security Promotion Law in AWS” white paper has been published.
- Develop private, secure, enterprise-generated AI applications using Amazon Q Business and AWS IAM Identity Center
- Meta’s Llama 3 model is now available on Amazon Bedrock
- Build RAG and agent-based generative AI applications using the new Amazon Titan Text Premier model now available on Amazon Bedrock
- Building Generative AI Applications Using Amazon Bedrock Studio (preview)
- Amazon Aurora MySQL Version 2 (MySQL 5.7 Compatible) to Version 3 (MySQL 8.0 Compatible) Checklist, Part 2
- Amazon Aurora MySQL Version 2 (MySQL 5.7 Compatible) to Version 3 (MySQL 8.0 Compatible) Checklist, Part 1
- How healthcare organizations are using generative AI on AWS to transform data into better patient outcomes
- Amazon Bedrock Agent: Introducing a simple creation and configuration experience
- Overcoming CNAME Chain Challenges: Simplifying Management with Route 53 Resolver DNS Firewall
AWS Cloud Operations & Migrations Blog
- Planning Migrations to successfully incorporate Generative AI
- Event Driven Architecture using Amazon EventBridge – Part 1
- How to automate application log ingestion from Amazon EKS on Fargate into AWS CloudTrail Lake
AWS Machine Learning Blog
- Mixtral 8x22B is now available in Amazon SageMaker JumpStart
- Building Generative AI prompt chaining workflows with human in the loop