7/10/2024, 12:00:00 AM ~ 7/11/2024, 12:00:00 AM (UTC)
Recent Announcements
Amazon MWAA now available in nine additional Regions
Amazon Managed Workflows for Apache Airflow (MWAA) is now available in nine new AWS Regions: Asia Pacific (Jakarta), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Middle East (UAE), Europe (Spain), Europe (Zurich), Canada West (Calgary), Israel (Tel Aviv), and Asia Pacific (Osaka).

Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform you use today to orchestrate your workflows, with improved scalability, availability, and security and without the operational burden of managing the underlying infrastructure. Learn more about using Amazon MWAA on the product page, and visit the AWS Region Table for more information on AWS Regions and services. To learn more about Amazon MWAA, visit the Amazon MWAA documentation.

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
AWS License Manager now integrates with Red Hat Subscription Manager
AWS License Manager now integrates with Red Hat Subscription Manager (RHSM) to provide greater insight into the use of Red Hat Enterprise Linux (RHEL) on Amazon EC2. With instance and subscription data from RHSM accessible directly in License Manager, you can better manage cost optimization and compliance of your RHEL usage on AWS.

You can already use License Manager to discover and track RHEL instances on Amazon EC2 launched from AWS-provided Amazon Machine Images (AMIs). License Manager can now integrate with RHSM to show information about instances launched from custom RHEL images. The new feature helps customers discover RHEL instances and subscriptions in use on AWS and identify cases of double payment, where an instance has subscriptions purchased from both AWS and Red Hat assigned to it. This feature is available in all AWS Regions where AWS License Manager is available. To get started, visit the AWS License Manager console and select the Linux subscriptions tab in the left navigation. First-time users are directed to AWS License Manager settings to select the Regions from which Linux subscription data should be gathered and to set up linking with AWS Organizations for a cross-account view. Once this is completed, you will be asked to provide your RHSM API token to finish the integration. To learn more, see the Linux subscriptions section in the AWS License Manager user guide.
Amazon QuickSight launches a 20x higher limit for SPICE JOIN
Amazon QuickSight is excited to announce an increase in the table size limit for joining SPICE datasets from 1GB to 20GB. Previously, when customers prepared their data and joined tables from various sources, including SPICE, the combined secondary tables had to be less than 1GB. This limitation often forced QuickSight customers to find workarounds in their upstream data pipelines to handle large datasets and build complex data models. With the new 20GB limit for secondary tables, users can now join SPICE tables with 20 times the previous capacity, significantly enhancing data preparation capabilities in QuickSight. This upgrade also enables large cross-source join tasks by leveraging SPICE ingestion. For further details, see the Amazon QuickSight documentation.

The new SPICE JOIN with the 20GB limit is now available in Amazon QuickSight Enterprise Edition in all QuickSight Regions: US East (N. Virginia and Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Europe (Frankfurt, Stockholm, Paris, Ireland, London, Zurich, and Milan), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo, and Jakarta), Africa (Cape Town), and the AWS GovCloud (US-West) Region.
Guardrails for Amazon Bedrock can now detect hallucinations & safeguard apps using any FM
Guardrails for Amazon Bedrock enables customers to implement safeguards based on their application requirements and responsible AI policies. Today, Guardrails adds contextual grounding checks and introduces a new ApplyGuardrail API to build trustworthy generative AI applications using any foundation model (FM).

Customers rely on the inherent capabilities of FMs to generate grounded (credible) responses based on a company's source data. However, FMs can conflate multiple pieces of information, producing incorrect or new information and impacting the reliability of the application. With contextual grounding checks, Guardrails can now detect hallucinations in model responses for RAG (retrieval-augmented generation) and conversational applications. This safeguard helps detect and filter responses that are factually incorrect based on a reference source or irrelevant to the user's query. Customers can configure confidence thresholds to filter responses with low confidence of grounding or relevance. In addition, to support safeguarding applications built with different FMs, Guardrails now offers an ApplyGuardrail API to evaluate user inputs and model responses for any custom or third-party FM, in addition to the FMs already supported in Amazon Bedrock. The ApplyGuardrail API enables centralized safety and governance for all your generative AI applications. Guardrails is the only offering from a major cloud provider that combines safety, privacy, and truthfulness protections in a single solution. Contextual grounding checks and the ApplyGuardrail API are supported in all AWS Regions where Guardrails for Amazon Bedrock is supported. To learn more about Guardrails for Amazon Bedrock, visit the feature page and read the news blog.
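The ApplyGuardrail API takes the content to evaluate along with a guardrail identifier and version. As a minimal sketch, the request could be assembled as below; the field names follow the boto3 `bedrock-runtime` `apply_guardrail` operation, but the guardrail ID is a placeholder, so verify against the current SDK documentation before use.

```python
# Hedged sketch: assembling parameters for the ApplyGuardrail API.
# The guardrail identifier "gr-example-id" is a placeholder, not a real ID.

def build_apply_guardrail_request(guardrail_id, guardrail_version, text, source="INPUT"):
    """Build the keyword arguments for bedrock-runtime's apply_guardrail.

    source is "INPUT" to evaluate a user prompt, or "OUTPUT" to evaluate a
    model response (e.g., from a self-hosted or third-party FM).
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": source,
        "content": [{"text": {"text": text}}],
    }

# With boto3, this would be invoked roughly as:
#   client = boto3.client("bedrock-runtime")
#   response = client.apply_guardrail(**request)
request = build_apply_guardrail_request(
    "gr-example-id", "1", "Is this claim grounded in the source?", source="OUTPUT"
)
```

The response would then indicate whether the guardrail intervened and which checks (such as contextual grounding) triggered.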
AWS Backup now supports Amazon Elastic Block Store (EBS) Snapshots Archive in backup policies
Today, AWS Backup announces support for Amazon EBS Snapshots Archive in backup policies, allowing customers to automatically move Amazon EBS snapshots created by AWS Backup to Amazon EBS Snapshots Archive at the AWS Organizations level. Amazon EBS Snapshots Archive is a low-cost, long-term storage tier meant for rarely accessed snapshots that do not need frequent retrieval. You can now use your Organizations' management account to set an Amazon EBS snapshots archival policy across accounts.

To get started, create a new AWS Backup policy, or edit an existing one, from your AWS Organizations' management account. You can use AWS Backup policies to transition your Amazon EBS snapshots to Amazon EBS Snapshots Archive and manage their lifecycle, alongside AWS Backup's other supported resources. Amazon EBS snapshots are incremental, storing only the changes since the last snapshot, which makes them cost-effective for daily and weekly backups that need to be accessed frequently. You may also have Amazon EBS snapshots that you only need to access every few months and retain for long-term regulatory requirements. For these long-term snapshots, you can now transition your Amazon EBS snapshots managed by AWS Backup to the Amazon EBS Snapshots Archive tier to store full snapshots at lower cost. AWS Backup support for Amazon EBS Snapshots Archive in backup policies is available in all commercial and AWS GovCloud (US) Regions where AWS Backup, AWS Backup policies, and EBS Snapshots Archive are available. You can get started by using the AWS Organizations API or CLI. For more information, visit our documentation.
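In a backup policy attached from the management account, the archive transition is expressed in a rule's lifecycle settings. The fragment below is an illustrative sketch only: backup policies do use the `@@assign` inheritance operators shown, but the exact key for the archive opt-in is an assumption based on the corresponding setting in the backup plan API, and the role ARN and vault name are placeholders. Check the AWS Organizations backup policy syntax reference before use.

```json
{
  "plans": {
    "ebs-archive-plan": {
      "regions": { "@@assign": ["us-east-1"] },
      "rules": {
        "monthly": {
          "schedule_expression": { "@@assign": "cron(0 5 1 * ? *)" },
          "target_backup_vault_name": { "@@assign": "Default" },
          "lifecycle": {
            "move_to_cold_storage_after_days": { "@@assign": "30" },
            "delete_after_days": { "@@assign": "365" },
            "opt_in_to_archive_for_supported_resources": { "@@assign": "true" }
          }
        }
      },
      "selections": {
        "tags": {
          "archive-selection": {
            "iam_role_arn": { "@@assign": "arn:aws:iam::$account:role/BackupRole" },
            "tag_key": { "@@assign": "backup" },
            "tag_value": { "@@assign": ["true"] }
          }
        }
      }
    }
  }
}
```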
Amazon Cognito is now available in Canada West (Calgary) Region
Starting today, customers can use Amazon Cognito in the Canada West (Calgary) Region. Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. The service scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

With the addition of this Region, Amazon Cognito is now available in 30 AWS Regions globally. For a list of Regions where Amazon Cognito is available, see the AWS Region Table. To learn more about Amazon Cognito, visit the product documentation page. To get started, visit the Amazon Cognito home page.
Amazon Q Developer is now available in SageMaker Studio
Amazon SageMaker, a fully managed machine learning service, announces the general availability of Amazon Q Developer in SageMaker Studio. SageMaker Studio customers now get generative AI assistance powered by Q Developer right within their JupyterLab Integrated Development Environment (IDE). With Q Developer, data scientists and ML engineers can access expert guidance on SageMaker features, code generation, and troubleshooting. This boosts productivity by eliminating tedious online searches and documentation review, leaving more time to deliver differentiated business value.

Data scientists and ML engineers using JupyterLab in SageMaker Studio can kick off their model development lifecycle with Amazon Q Developer. They can use the chat capability to discover and learn how to leverage SageMaker features for their use case without having to sift through extensive documentation. They can also generate code tailored to their needs to jump-start the development process, and get in-line code suggestions and conversational assistance to edit, explain, and document their code in JupyterLab. Users can also leverage Q Developer to receive step-by-step guidance for troubleshooting when running into errors. This integration empowers data scientists and ML engineers to accelerate their workflow, enhance productivity, and deliver ML models more efficiently, streamlining the machine learning development process. This feature is available in all commercial AWS Regions where SageMaker Studio is available. For additional details, see our product page and documentation.
Amazon Cognito is now available in Asia Pacific (Hong Kong) Region
Starting today, customers can use Amazon Cognito in the Asia Pacific (Hong Kong) Region. Cognito makes it easy to add authentication, authorization, and user management to your web and mobile apps. The service scales to millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

With the addition of this Region, Amazon Cognito is now available in 29 AWS Regions globally. For a list of Regions where Amazon Cognito is available, see the AWS Region Table. To learn more about Amazon Cognito, visit the product documentation page. To get started, visit the Amazon Cognito home page.
Customize Amazon Q Developer code recommendations, and receive chat responses in the IDE (Preview)
Today, AWS announces the general availability of customized Amazon Q Developer inline code recommendations. You can now securely connect Amazon Q Developer to your private code bases and generate more precise suggestions by including your organization's internal APIs, libraries, classes, methods, and best practices. In preview, you can also use Amazon Q Developer chat in the IDE to ask questions about how your internal code base is structured, where and how certain functions or libraries are used, or what specific functions, methods, or APIs do. With these capabilities, Amazon Q Developer can save builders hours typically spent examining previously written code or internal documentation to understand how to use internal APIs, libraries, and more.
To get started, you first need to securely connect your organization’s private repositories to Amazon Q Developer in the AWS Management Console. Amazon Q Developer administrators can select which repositories to use to customize recommendations, applying strict access control. Your administrators can decide which customization to activate, and they can manage access to a private customization from the console so only specific developers have access. Each customization is isolated from other customers, and none of the customizations built with these new capabilities will be used to train the foundation models underlying Amazon Q Developer.
Customized code recommendations and chat in the IDE are available as part of the Amazon Q Developer Pro subscription. To learn more about pricing, visit Amazon Q Developer Pricing. To learn more about these capabilities, see Amazon Q Developer or read the announcement blog post.
Agents for Amazon Bedrock now support code interpretation (Preview)
Amazon Web Services, Inc. (AWS) today announced a new code interpretation capability in Agents for Amazon Bedrock. Code interpretation allows agents to dynamically generate and execute code snippets within a secure sandboxed environment, extending the capabilities of agents for complex use cases such as data analysis, data visualization, and optimization problems.

This new capability allows developers to move beyond the predefined capabilities of the large language model (LLM) and tackle more complex, data-driven use cases. Agents can now generate and execute code, process files with diverse data types and formatting, and even generate graphs to enhance the user experience. The iterative code execution capability also allows agents to work through challenging data science problems, giving them the ability to orchestrate increasingly complex tasks. Code interpretation is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions. Learn more here.
Agents for Amazon Bedrock now retain memory (Preview)
Amazon Web Services, Inc. (AWS) today announced that Agents for Amazon Bedrock can retain memory across multiple interactions over time, allowing developers to build generative AI applications that seamlessly adapt to user context and preferences, enhancing personalized experiences and automating complex business processes more efficiently.

By retaining memory, AI assistants remember historical knowledge and learn from user interactions over time. For example, if a user is booking a flight, the application can remember the user's travel preferences for future bookings. This capability is crucial for complex multi-step tasks like insurance claims processing, where continuity and context retention significantly improve the user experience. Memory retention is available in all AWS Regions where Agents for Amazon Bedrock supports the Claude 3 Sonnet and Haiku models. Learn more about memory retention on Agents for Amazon Bedrock here.
Knowledge Bases for Amazon Bedrock now supports advanced RAG capabilities
Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Chunking enables processing of long documents by breaking them into smaller chunks, so that relevant knowledge can be retrieved accurately for a user's question. Today, we are launching advanced chunking options. The first is custom chunking: customers can write their own chunking code as a Lambda function, and even use off-the-shelf components from frameworks like LangChain and LlamaIndex. We are also launching built-in chunking options such as semantic and hierarchical chunking.

Additionally, customers can enable smart parsing to extract information from more complex data such as tables. This capability uses Amazon Bedrock foundation models to parse tabular content in file formats such as PDF to improve retrieval accuracy, and you can customize parsing prompts to extract data in the format of your choice. Knowledge Bases now also supports query reformulation. This capability breaks down queries into simpler sub-queries, retrieves relevant information for each, and combines the results into a final comprehensive answer. With these new accuracy improvements for chunking, parsing, and advanced query handling, Knowledge Bases empowers users to build highly accurate and relevant knowledge resources suited for enterprise use cases. These capabilities are supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.
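With custom chunking, the splitting logic itself is yours to define. The sketch below is illustrative only: it shows the kind of fixed-size, overlapping chunker a custom chunking Lambda might implement, not the actual Lambda event/response contract, which is defined in the Knowledge Bases documentation.

```python
# Illustrative chunking logic only; the real Knowledge Bases custom-chunking
# Lambda wraps logic like this in the event/response format from the docs.

def chunk_text(text, max_chars=200, overlap=40):
    """Split text into overlapping chunks on whitespace boundaries.

    Each chunk stays under max_chars (assuming no single word exceeds it),
    and the tail of each chunk is carried into the next for context continuity.
    """
    words = text.split()
    chunks, current, size = [], [], 0
    for word in words:
        if size + len(word) + 1 > max_chars and current:
            chunks.append(" ".join(current))
            # Carry a tail of the previous chunk forward as overlap.
            tail, t = [], 0
            for w in reversed(current):
                if t + len(w) + 1 > overlap:
                    break
                tail.insert(0, w)
                t += len(w) + 1
            current, size = tail, t
        current.append(word)
        size += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Hierarchical or semantic strategies replace the fixed-size boundary test with document structure or embedding-similarity breakpoints, but the chunk-and-overlap skeleton is the same.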
Knowledge Bases for Amazon Bedrock now supports additional data sources (preview)
Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Today, we are launching a new feature that allows customers to securely ingest data from various sources into their knowledge bases. Knowledge Bases now supports a web data source, allowing you to index public web pages, as well as three additional data connectors: Atlassian Confluence, Microsoft SharePoint, and Salesforce. You can connect directly to these data sources to build your RAG applications. These new capabilities reduce the time and cost associated with data movement, while ensuring that knowledge bases stay up to date with the latest changes in the connected data sources.

Customers can set up these new data sources through the AWS Management Console for Amazon Bedrock or the CreateDataSource API. This capability is supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.
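For example, the web data source is configured by pointing the knowledge base at one or more seed URLs. A hedged sketch follows: the nested field names mirror the Bedrock Agent web-crawler configuration shape, but should be verified against the current `CreateDataSource` API reference, and the URL is a placeholder.

```python
# Hedged sketch of the dataSourceConfiguration payload for a web data source.
# Field names are assumptions based on the documented API shape; verify them
# against the current Bedrock Agent API reference.

def web_data_source_config(seed_urls):
    """Build a web-crawler data source configuration for a knowledge base."""
    return {
        "type": "WEB",
        "webConfiguration": {
            "sourceConfiguration": {
                "urlConfiguration": {
                    "seedUrls": [{"url": u} for u in seed_urls]
                }
            }
        },
    }

# Would be passed as dataSourceConfiguration to boto3's bedrock-agent
# create_data_source call, alongside knowledgeBaseId and a data source name.
config = web_data_source_config(["https://example.com/docs"])
```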
Amazon Bedrock Prompt Management and Prompt Flows now available in preview
Today, we are announcing the preview launch of Amazon Bedrock Prompt Management and Prompt Flows. Amazon Bedrock Prompt Management simplifies the creation, evaluation, versioning, and sharing of prompts to help developers and prompt engineers get the best responses from foundation models for their use cases. Developers can use the Prompt Builder to experiment with multiple FMs, model configurations, and prompt messages, and can test and compare prompts in place without the need for any deployment. To share a prompt for use in downstream applications, they can simply create a version and make an API call to retrieve it. In addition, Bedrock Prompt Flows accelerates the creation, testing, and deployment of workflows through an intuitive visual builder. Developers can drag and drop different components such as prompts, Knowledge Bases, and Lambda functions to automate a workflow.
Fine-tuning for Anthropic’s Claude 3 Haiku in Amazon Bedrock (Preview)
Fine-tuning for Anthropic's Claude 3 Haiku model in Amazon Bedrock is now available in preview. Amazon Bedrock is the only fully managed service that provides you with the ability to fine-tune Claude models. Claude 3 Haiku is Anthropic's most compact model, and is one of the most affordable and fastest options on the market for its intelligence category, according to Anthropic. By providing your own task-specific training dataset, you can fine-tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency and further tailor generative AI for your business.

Fine-tuning allows Claude 3 Haiku to excel in areas crucial to your business, compared to more general models, by encoding company and domain knowledge. Within your secure AWS environment, use Amazon Bedrock to customize Claude 3 Haiku with your own data to build applications specific to your domain, organization, and use case. By fine-tuning Haiku and adapting its knowledge to your exact business requirements, you can create unique user experiences that reflect your company's proprietary information, brand, products, and more. You can also enhance performance for domain-specific tasks such as classification, interactions with custom APIs, or industry-specific data interpretation. Amazon Bedrock makes a separate copy of the base foundation model that is accessible only by you, and trains this private copy. Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock is now available in preview in the US West (Oregon) AWS Region. To learn more, read the launch blog and documentation. To request access to the preview, contact your AWS account team or submit a support ticket via the AWS Management Console. When creating the support ticket, select Bedrock as the Service and Models as the Category.
Announcing AWS App Studio preview
AWS App Studio, a generative artificial intelligence (AI)-powered service that uses natural language to create enterprise-grade applications, is now available in preview. App Studio opens up application development to technical professionals without software development skills (such as IT project managers, data engineers, and enterprise architects), empowering them to quickly build business applications without needing operational expertise. This allows users to focus on building applications that solve business problems and increase productivity in their roles, while removing the heavy lifting of building and running applications.

App Studio is the fastest and easiest way for technical professionals to build enterprise-grade applications that were previously built only by professional developers. App Studio's generative AI-powered assistant accelerates the application creation process. To get started, builders write a basic prompt describing the application they want; App Studio then generates an outline to verify the user's intent and builds an application with a multi-page UI, a data model, and business logic. Builders can then ask clarifying questions, and App Studio will give detailed answers on how to make changes using the point-and-click interface. Users can also easily connect their application to internal data sources using built-in connectors for AWS services (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, along with hundreds of third-party services (such as HubSpot, Twilio, and Zendesk) via an API connector. With App Studio, users do not have to think about the underlying code at all: App Studio handles all the deployment, operations, and maintenance.
It is free to build with App Studio, and customers only pay for the time employees spend using the published applications, saving up to 80% compared to other low-code offerings.
App Studio is now available in preview in the US West (Oregon) AWS Region.
To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement blog post.
AWS announces the general availability of vector search for Amazon MemoryDB
Vector search for Amazon MemoryDB, an in-memory database with multi-AZ durability, is now generally available. This capability helps you store, index, retrieve, and search vectors. Amazon MemoryDB delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. Vector search for MemoryDB supports storing millions of vectors with single-digit millisecond query and update latencies at the highest levels of throughput with >99% recall. You can generate vector embeddings using AI/ML services, such as Amazon Bedrock and Amazon SageMaker, and store them within MemoryDB.

With vector search for MemoryDB, you can develop real-time machine learning (ML) and generative AI applications that require the highest throughput at the highest recall rates with the lowest latency, using the MemoryDB API or orchestration frameworks such as LangChain. For example, a bank can use vector search for MemoryDB to detect anomalies, such as fraudulent transactions during periods of high transactional volume, with minimal false positives.
Vector search for MemoryDB is available in all AWS Regions where MemoryDB is available, at no additional cost.
To get started, create a new MemoryDB cluster using MemoryDB version 7.1 and enable vector search through the AWS Management Console or AWS Command Line Interface (CLI). To learn more, check out the vector search for MemoryDB documentation.
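Because MemoryDB is Redis OSS-compatible, client code typically stores each embedding as packed little-endian FLOAT32 bytes in a hash field that the vector index is built over. The sketch below covers only that client-side serialization step (the key name and `hset` call in the comment are illustrative); index creation and query syntax are covered in the MemoryDB vector search documentation.

```python
import struct

# Serialize embeddings to the packed FLOAT32 byte layout commonly used for
# vector fields in Redis-compatible stores such as MemoryDB.

def pack_vector(vec):
    """Serialize a list of floats to little-endian FLOAT32 bytes (4 bytes each)."""
    return struct.pack(f"<{len(vec)}f", *vec)

def unpack_vector(data):
    """Deserialize packed FLOAT32 bytes back to a list of floats."""
    n = len(data) // 4
    return list(struct.unpack(f"<{n}f", data))

embedding = [0.1, 0.25, -0.5]  # toy example; real embeddings come from an FM
blob = pack_vector(embedding)
# Illustrative storage call with a redis-compatible client:
#   client.hset("doc:1", mapping={"embedding": blob, "text": "..."})
```

Note that FLOAT32 packing is lossy for values that are not exactly representable in 32 bits, which is normally acceptable for embedding workloads.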
Announcing the general availability of Amazon Q Apps
Today, AWS announces the general availability of Amazon Q Apps, an Amazon Q Business capability that has been in public preview since April 2024.

Amazon Q Apps empowers organizational users to quickly turn their ideas into apps, all in a single step, from their conversation with Amazon Q Business or by describing the app they want to build in their own words. With Amazon Q Apps, users can effortlessly build, share, and customize apps on enterprise data to streamline tasks and boost individual and team productivity. Users can also publish apps to the admin-managed library and share them with their coworkers. Amazon Q Apps inherits user permissions, access controls, and enterprise guardrails from Amazon Q Business for secure sharing and adherence to data governance policies.
Amazon Q Apps enhances business user experience and collaboration with new and improved capabilities. Customers can now bring the power of Amazon Q Apps into their tools of choice and application environment through APIs that seamlessly allow creating and consuming Amazon Q Apps outputs. App creators can now review the original app creation prompt to refine and improve new app versions without starting from scratch, as well as to pick data sources to improve output quality.
Amazon Q Business and Amazon Q Apps are available in the US East (N. Virginia) and US West (Oregon) AWS Regions. For more information, check out Amazon Q Business and read the AWS News Blog.
AWS Blogs
AWS Japan Blog (Japanese)
AWS News Blog
- Vector search for Amazon MemoryDB is now generally available
- Build enterprise-grade applications with natural language using AWS App Studio (preview)
- Amazon Q Apps, now generally available, enables users to build their own generative AI apps
- Customize Amazon Q Developer (in your IDE) with your private code base
- Agents for Amazon Bedrock now support memory retention and code interpretation (preview)
- Guardrails for Amazon Bedrock can now detect hallucinations and safeguard apps built using custom or third-party FMs
- Knowledge Bases for Amazon Bedrock now supports additional data connectors (in preview)
- Introducing Amazon Q Developer in SageMaker Studio to streamline ML workflows
AWS Cloud Operations & Migrations Blog
AWS Big Data Blog
AWS Database Blog
AWS HPC Blog
AWS Machine Learning Blog
- Streamline generative AI development in Amazon Bedrock with Prompt Management and Prompt Flows (preview)
- Empowering everyone with GenAI to rapidly build, customize, and deploy apps securely: Highlights from the AWS New York Summit
- A progress update on our commitment to safe, responsible generative AI
- Fine-tune Anthropic’s Claude 3 Haiku in Amazon Bedrock to boost model accuracy and quality
AWS Security Blog
Open Source Projects
AWS CLI
Amplify for Flutter
Amplify UI
- @aws-amplify/ui-vue@4.2.9
- @aws-amplify/ui-react-storage@3.1.4
- @aws-amplify/ui-react-notifications@2.0.21
- @aws-amplify/ui-react-native@2.2.3
- @aws-amplify/ui-react-liveness@3.1.1
- @aws-amplify/ui-react-geo@2.0.17
- @aws-amplify/ui-react-core-notifications@2.0.17
- @aws-amplify/ui-react-core@3.0.17
- @aws-amplify/ui-react@6.1.13
- @aws-amplify/ui-angular@5.0.17