11/29/2023, 12:00:00 AM ~ 11/30/2023, 12:00:00 AM (UTC)

Recent Announcements

Evaluate, compare, and select the best FMs for your use case in Amazon Bedrock (Preview)

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined metrics such as accuracy, robustness, and toxicity. For subjective or custom metrics, such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow with a few clicks. Human evaluation workflows can leverage your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets or you can bring your own datasets.
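Under the hood, an automatic accuracy-style metric scores model outputs against reference answers. As a toy illustration only (not Bedrock's actual implementation), an exact-match scorer might look like:

```python
# Toy sketch of an accuracy-style metric of the kind used in automatic
# model evaluation. The names and scoring rule here are illustrative,
# not the service's actual implementation.

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match their reference answer."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(predictions)

preds = ["Paris", "4", "blue whale"]
refs = ["paris", "4", "Blue Whale"]
print(exact_match_accuracy(preds, refs))  # 1.0
```

Human evaluation covers the subjective metrics such a scorer cannot capture.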

AWS announces OR1 for Amazon OpenSearch Service

Amazon OpenSearch Service introduces OR1, the OpenSearch Optimized Instance family, which delivers up to 30% price-performance improvement over existing instances in internal benchmarks and uses Amazon S3 to provide 11 nines of durability. The new OR1 instances are best suited for indexing-heavy workloads and offer better indexing performance than the existing memory-optimized instances available on OpenSearch Service.

Amazon SageMaker launches new inference capabilities to reduce costs and latency

We are excited to announce new capabilities in Amazon SageMaker that help customers reduce model deployment costs by 50% on average and achieve 20% lower inference latency on average. Customers can deploy multiple models to the same instance to better utilize the underlying accelerators. SageMaker actively monitors instances that are processing inference requests and intelligently routes requests based on which instances are available.

Announcing new AWS AI Service Cards - to advance responsible AI

We are excited to announce new AWS AI Service Cards, a resource to increase transparency and help customers better understand our AWS AI services, including how to use them in a responsible way. AI Service Cards are a form of responsible AI documentation that gives customers a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and operation best practices for our AI services. They are part of a comprehensive development process we undertake to build our services in a responsible way with fairness, explainability, veracity and robustness, governance, transparency, privacy and security, safety, and controllability.

AWS Clean Rooms ML is now available in preview

AWS Clean Rooms ML (Preview) helps you and your partners apply privacy-enhancing ML to generate predictive insights without having to share raw data with each other. The capability’s first model is specialized to help companies create lookalike segments. With AWS Clean Rooms ML lookalike modeling, you can train your own custom model using your data, then invite your partners to bring a small sample of their records to a collaboration to generate an expanded set of similar records, all while protecting your and your partners’ underlying data. Healthcare modeling will be available in the coming months.

Llama 2 70B foundation model from Meta is now available in Amazon Bedrock

You can now access Meta’s Llama 2 70B model in Amazon Bedrock, joining the already available Llama 2 13B model. Llama 2 models are next-generation large language models (LLMs) provided by Meta. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies, like Meta, along with a broad set of capabilities that provide you with the easiest way to build and scale generative AI applications with foundation models.
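For reference, invoking Llama 2 70B through the Bedrock runtime takes a JSON body with Meta's Llama 2 parameters. A minimal sketch; the boto3 call itself is commented out because it requires AWS credentials and granted model access:

```python
import json

# Request body for invoking Llama 2 70B through the Amazon Bedrock
# runtime API. Field names follow Meta's Llama 2 schema on Bedrock.
body = json.dumps({
    "prompt": "Explain vector embeddings in one sentence.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
})

# With credentials and model access granted, the call would be:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="meta.llama2-70b-chat-v1", body=body)
# print(json.loads(response["body"].read())["generation"])

print(json.loads(body)["max_gen_len"])  # 256
```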

Announcing smart sifting of data for Amazon SageMaker Model Training in preview

Today, we’re excited to announce the preview of a new smart sifting capability of Amazon SageMaker that automatically inspects and evaluates training data on the fly to selectively learn from only the most informative data samples, reducing model training time and cost by up to 35%. You can get started with smart sifting in minutes without making changes to your existing data pipelines or training scripts.
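AWS has not published the exact selection criterion, but a common sifting approach keeps only the samples the model currently finds most informative, for example those with the highest loss. A toy sketch of that idea (not SageMaker's actual algorithm):

```python
# Toy illustration of loss-based data sifting: keep only the samples
# a model currently finds hardest (highest loss) and skip the rest.
# This is a conceptual sketch, not SageMaker's actual algorithm.

def sift_batch(samples, losses, keep_fraction=0.5):
    """Return the keep_fraction of samples with the highest loss."""
    k = max(1, int(len(samples) * keep_fraction))
    ranked = sorted(zip(losses, range(len(samples))), reverse=True)
    kept_indices = sorted(i for _, i in ranked[:k])
    return [samples[i] for i in kept_indices]

batch = ["easy_a", "hard_b", "easy_c", "hard_d"]
losses = [0.05, 2.3, 0.10, 1.8]
print(sift_batch(batch, losses))  # ['hard_b', 'hard_d']
```

Training on the sifted half of each batch is what drives the time and cost savings.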

Claude 2.1 foundation model from Anthropic is now generally available in Amazon Bedrock

Anthropic’s Claude 2.1 foundation model is now generally available in Amazon Bedrock. Claude 2.1 delivers key capabilities for enterprises, such as an industry-leading 200,000-token context window (2x the context of Claude 2.0), reduced rates of hallucination, improved accuracy over long documents, system prompts, and a beta tool use feature for function calling and workflow orchestration.
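As a sketch of how the new system prompt support looks in practice: Claude 2.1 on Bedrock accepts the system prompt as plain text placed before the first Human turn. Field names follow Anthropic's Claude request schema; the invocation itself is commented out since it requires AWS credentials and model access:

```python
import json

# Request body for Claude 2.1 on Amazon Bedrock. Claude 2.1 takes a
# system prompt as plain text before the first "Human:" turn.
system = "You are a concise assistant for a retail brand."
body = json.dumps({
    "prompt": f"{system}\n\nHuman: Summarize our return policy in one line.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.3,
})

# With AWS credentials and model access granted, the invocation would be:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-v2:1", body=body)
# print(json.loads(response["body"].read())["completion"])
```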

Amazon SageMaker Clarify now supports foundation model (FM) evaluations in preview

Today, Amazon SageMaker Clarify announces a new capability to support foundation model (FM) evaluations. AWS customers can evaluate, compare, and select FMs based on metrics such as accuracy, robustness, bias, and toxicity in minutes.

SageMaker now provides improved SDK tooling and UX for model deployment

We are excited to announce new tools and improvements that reduce the time to deploy machine learning (ML) models, including foundation models (FMs), on Amazon SageMaker for inference at scale from days to hours. This includes a new Python SDK library that simplifies packaging and deploying an ML model on SageMaker from seven steps to one, with an option to run local inference. Further, Amazon SageMaker offers new interactive UI experiences in Amazon SageMaker Studio that help customers quickly deploy their trained ML models or FMs using performant, cost-optimized configurations in as few as three clicks.

Announcing Amazon SageMaker HyperPod, a purpose-built infrastructure for distributed training at scale

Today, AWS announces the general availability of Amazon SageMaker HyperPod, which reduces time to train foundation models (FMs) by up to 40% by providing purpose-built infrastructure for distributed training at scale.

Stable Diffusion XL 1.0 foundation model from Stability AI is now generally available in Amazon Bedrock

Stability AI’s Stable Diffusion XL 1.0 (SDXL 1.0) foundation model is now generally available on demand in Amazon Bedrock. SDXL 1.0 is the most advanced model in the Stable Diffusion text-to-image suite launched by Stability AI. The model generates high-quality images in virtually any art style and excels at photorealism.

Amazon SageMaker Pipelines now provide a simplified developer experience for AI/ML workflows

Today, we are excited to announce the general availability of a simplified developer experience for Amazon SageMaker Pipelines. The improved Python SDK enables you to build Machine Learning (ML) workflows quickly with familiar Python syntax. Key features of the SDK include a new Python decorator (@step) for custom steps, a Notebook Jobs step type, and a workflow scheduler.

Leverage FMs for business analysis at scale with Amazon SageMaker Canvas

Amazon SageMaker Canvas is a no-code tool to build machine learning (ML) models and generate predictions. As announced on October 5, customers can access and evaluate foundation models (FMs) from Amazon Bedrock and SageMaker JumpStart to generate and summarize content.

AWS announces vector search for Amazon DocumentDB

Amazon DocumentDB (with MongoDB compatibility) now supports vector search, a new capability that enables you to store, index, and search millions of vectors with millisecond response times. Vectors are numerical representations of unstructured data, such as text, created from machine learning (ML) models that help capture the semantic meaning of the underlying data. Vector search for Amazon DocumentDB can store vectors from Amazon Bedrock, Amazon SageMaker, and more. There are no upfront commitments or additional costs to use vector search, and you only pay for the data you store and compute resources you use.
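Conceptually, a vector search ranks stored embeddings by similarity to a query embedding. A minimal in-memory sketch using cosine similarity (illustrative only; real workloads would use DocumentDB's vector index rather than a Python loop):

```python
import math

# Minimal in-memory illustration of what a vector search does: rank
# stored embeddings by cosine similarity to a query embedding.
# Real workloads would use DocumentDB's vector index instead.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.3, 0.1],
    "doc_cars": [0.0, 0.2, 0.9],
}
query = [1.0, 0.0, 0.0]

ranked = sorted(documents, key=lambda d: cosine(documents[d], query), reverse=True)
print(ranked[0])  # doc_cats
```

A dedicated vector index answers the same question over millions of vectors in milliseconds instead of scanning every document.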

Amazon SageMaker Canvas now supports natural language instructions for data preparation

Amazon SageMaker Canvas now supports natural language instructions for data exploration, visualization, and preparation to build machine learning (ML) models. Amazon SageMaker Canvas is a no-code tool that enables customers to easily create highly accurate ML models without writing a line of code. Starting today, you can use FM-powered natural language instructions enabled by Amazon Bedrock for data preparation. This new capability enables you to interact with your data, ask questions, visualize feature distribution and correlations, and transform data to the right structure for your business problems, using natural language queries.

Amazon Bedrock now supports batch inference

You can now use Amazon Bedrock to process prompts in batch to get responses for model evaluation, experimentation, and offline processing.
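Batch inference takes a JSONL input file stored in S3, where each line carries a record ID and a model-specific request body. A sketch of assembling such an input; the Llama-style modelInput shown is just one example of a model body:

```python
import json

# Sketch of building a Bedrock batch inference input file: JSONL where
# each line pairs a record ID with a model-specific request body. The
# prompts and IDs here are made up for illustration.
prompts = ["Summarize Q3 results.", "Draft a product tagline."]
lines = [
    json.dumps({
        "recordId": f"REC{i:04d}",
        "modelInput": {"prompt": p, "max_gen_len": 128},
    })
    for i, p in enumerate(prompts)
]
jsonl = "\n".join(lines)
print(json.loads(jsonl.splitlines()[0])["recordId"])  # REC0000
```

The resulting file is uploaded to S3 and referenced when creating the batch job; Bedrock writes one output record per input line.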

Announcing API support for creating Amazon SageMaker Notebook jobs

Amazon SageMaker notebook jobs allow data scientists to run their notebooks on demand or on a schedule with a few clicks in Amazon SageMaker Studio, a web-based IDE for machine learning (ML). Today, we’re excited to announce that you can programmatically run notebooks as jobs using APIs provided by SageMaker Pipelines, SageMaker’s ML workflow orchestration service. Furthermore, you can create a multi-step ML workflow with multiple dependent notebooks using these APIs.

Amazon OpenSearch Service zero-ETL integration with Amazon S3 preview now available

Amazon OpenSearch Service zero-ETL integration with Amazon S3 is now available in preview. It gives customers a new way to query operational logs in Amazon S3 and S3-based data lakes without needing to switch between tools to analyze operational data. Customers can boost query performance and build fast-loading dashboards using the integration’s built-in query acceleration capabilities.

Amazon Q generative SQL is now available in Amazon Redshift Query Editor (preview)

Amazon Redshift introduces Amazon Q generative SQL in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Redshift, to simplify query authoring and increase your productivity by allowing you to express queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization’s complex database metadata.

AWS announces vector search for Amazon MemoryDB for Redis (Preview)

Amazon MemoryDB for Redis now supports vector search in preview, a new capability that enables you to store, index, and search vectors. MemoryDB is a database that combines in-memory performance with multi-AZ durability. With vector search for MemoryDB, you can develop real-time machine learning (ML) and generative AI applications with the highest performance demands using the popular, open-source Redis API. Vector search for MemoryDB supports storing millions of vectors, with single-digit millisecond query and update response times, and tens of thousands of queries per second (QPS) at greater than 99% recall. You can generate vector embeddings using AI/ML services like Amazon Bedrock and SageMaker, and store them within MemoryDB.
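The recall figure compares an approximate index's results against the exact nearest neighbors. Recall@k can be computed as follows (toy example with made-up result sets):

```python
# Recall@k measures how many of the true top-k nearest neighbors an
# approximate vector index actually returned. The result sets below
# are made up for illustration.

def recall_at_k(approximate, exact, k):
    """Fraction of the exact top-k results present in the approximate top-k."""
    return len(set(approximate[:k]) & set(exact[:k])) / k

exact_top5 = ["v1", "v2", "v3", "v4", "v5"]
approx_top5 = ["v1", "v2", "v3", "v5", "v9"]
print(recall_at_k(approx_top5, exact_top5, k=5))  # 0.8
```

Greater than 99% recall means the approximate index almost always returns the same neighbors an exhaustive search would.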

Amazon Titan Multimodal Embeddings foundation model now generally available in Amazon Bedrock

Amazon Titan Multimodal Embeddings helps customers power more accurate and contextually relevant multimodal search, recommendation, and personalization experiences for end users. You can now access the Amazon Titan Multimodal Embeddings foundation model in Amazon Bedrock.

Amazon Titan Text models—Express and Lite—now generally available in Amazon Bedrock

Amazon Titan Text Express and Amazon Titan Text Lite are large language models (LLMs) that help customers improve productivity and efficiency for an extensive range of text-related tasks, and offer price and performance options that are optimized for your needs. You can now access these Amazon Titan Text foundation models in Amazon Bedrock, which helps you easily build and scale generative AI applications with new text generation capabilities.

Vector engine for Amazon OpenSearch Serverless now generally available

Today, AWS announces the general availability of vector engine for Amazon OpenSearch Serverless. Vector engine for OpenSearch Serverless is a simple, scalable, and high-performing vector database which makes it easier for developers to build machine learning (ML)–augmented search experiences and generative artificial intelligence (AI) applications without having to manage the underlying vector database infrastructure. Developers can rely on the vector engine’s cost-efficient, secure, and mature serverless platform to seamlessly transition from application prototyping to production.

Amazon Neptune Analytics is now generally available

Today, AWS announces the general availability of Amazon Neptune Analytics, a new analytics database engine. Neptune Analytics makes it faster for data scientists and application developers to get insights and find trends by analyzing graph data with tens of billions of connections in seconds. Neptune Analytics adds to existing Neptune tools and services such as Amazon Neptune Database, Amazon Neptune ML, and visualization tools. Neptune is a fast, reliable, and fully managed graph database service for building and running applications with highly connected datasets, such as knowledge graphs, fraud graphs, identity graphs, and security graphs. With Neptune Analytics, you can find insights in graph data up to 80x faster by analyzing your existing Neptune graph database or graph data from a data lake such as Amazon S3.

AWS Clean Rooms Differential Privacy is now available in preview

Today, AWS announces the preview release of AWS Clean Rooms Differential Privacy, a new capability that helps you protect the privacy of your users with mathematically backed and intuitive controls in a few clicks. As a fully managed capability, no prior differential privacy experience is needed to help you prevent the re-identification of your users.
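For intuition only: the classic mechanism behind differential privacy adds Laplace noise scaled to sensitivity/epsilon to an aggregate before releasing it. Clean Rooms manages all of this for you, so the sketch below is purely illustrative:

```python
import math
import random

# Toy sketch of the Laplace mechanism that underlies differential
# privacy: add noise scaled to sensitivity/epsilon to a count before
# release. Clean Rooms manages this for you; this only shows the idea.

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale) from a uniform draw.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(true_count, epsilon, rng):
    sensitivity = 1.0  # adding/removing one user changes a count by at most 1
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
released = noisy_count(10_000, epsilon=1.0, rng=rng)
print(released)  # roughly 10000, perturbed by Laplace noise
```

A smaller epsilon means more noise and stronger privacy; the noise is small relative to large aggregates, which is why results stay useful.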

Amazon Redshift announces general availability of support for Apache Iceberg

Today, Amazon Redshift announces the general availability of support for Apache Iceberg tables. Now, you can easily access your Apache Iceberg tables in your data lake and join them with the data in your data warehouse. This capability offers increased performance whether you access your data lake tables using the auto-mounted AWS Glue Data Catalog or external schemas.

Announcing feature development capability of Amazon Q (Preview) in Amazon CodeCatalyst

Today, we are excited to announce the availability of Amazon Q’s feature development capability in preview, in Amazon CodeCatalyst. With this new capability, developers can assign a CodeCatalyst issue to Amazon Q, and Q performs the heavy lifting of converting a human prompt into an actionable plan, then completes the code changes and opens a pull request assigned to the requester. Q then monitors any associated workflows and attempts to correct any issues. The user can preview the code changes and merge the pull request. Development teams can use this new capability as an end-to-end, streamlined experience within Amazon CodeCatalyst, without having to enter the IDE.

Amazon Titan Image Generator foundation model in Amazon Bedrock now available in preview

Amazon Titan Image Generator enables content creators to rapidly ideate and iterate, producing high-quality images efficiently. You can now access the Amazon Titan Image Generator foundation model in Amazon Bedrock in preview, which helps you easily build and scale generative AI applications with new image generation and image editing capabilities.

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Machine Learning Blog

Networking & Content Delivery

AWS Security Blog

Open Source Project

AWS CLI

Karpenter