12/4/2024, 12:00:00 AM ~ 12/5/2024, 12:00:00 AM (UTC)
Recent Announcements
Amazon Bedrock Knowledge Bases now supports structured data retrieval
Amazon Bedrock Knowledge Bases now supports natural language querying to retrieve structured data from your data sources. With this launch, Bedrock Knowledge Bases offers an end-to-end managed workflow for customers to build custom generative AI applications that can access and incorporate contextual information from a variety of structured and unstructured data sources. Using advanced natural language processing, Bedrock Knowledge Bases can transform natural language queries into SQL queries, allowing users to retrieve data directly from the source without the need to move or preprocess the data.

Developers often face challenges integrating structured data into generative AI applications. These include the difficulty of training large language models (LLMs) to convert natural language queries to SQL queries based on complex database schemas, as well as ensuring appropriate data governance and security controls are in place. Bedrock Knowledge Bases eliminates these hurdles by providing a managed natural language to SQL (NL2SQL) module. A retail analyst can now simply ask “What were my top 5 selling products last month?”, and Bedrock Knowledge Bases automatically translates that query into SQL, executes it against the database, and returns the results, or even provides a summarized narrative response. To generate accurate SQL queries, Bedrock Knowledge Bases leverages the database schema, previous query history, and other contextual information provided about the data sources. Bedrock Knowledge Bases currently supports structured data retrieval from Amazon Redshift and Amazon SageMaker Lakehouse, and is available in all commercial regions where Bedrock Knowledge Bases is supported. To learn more, visit here and here. For details on pricing, please refer here.
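Bedrock Knowledge Bases performs the NL2SQL translation as a managed service, using the schema and query history it is given. As a toy illustration of the idea only (the one-rule translator, table name, and columns below are invented), here is a minimal sketch against an in-memory SQLite table:

```python
import sqlite3

# Hypothetical one-rule "NL2SQL" translator, for illustration only. Bedrock
# Knowledge Bases generates SQL from the schema and query history it is given;
# this stub handles exactly one question shape.
def translate_to_sql(question: str) -> str:
    if "top 5 selling products" in question.lower():
        return (
            "SELECT product, SUM(quantity) AS units "
            "FROM sales GROUP BY product "
            "ORDER BY units DESC LIMIT 5"
        )
    raise ValueError("unsupported question")

# Invented example data standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, quantity INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("widget", 30), ("gadget", 50), ("gizmo", 20), ("doohickey", 5)],
)

sql = translate_to_sql("What were my top 5 selling products last month?")
rows = conn.execute(sql).fetchall()
print(rows[0])  # best seller first: ('gadget', 50)
```

A real NL2SQL module generalizes across arbitrary schemas and phrasings; the point here is only the overall shape: natural language in, generated SQL executed at the source, rows back.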
AWS announces Amazon SageMaker Partner AI Apps
Today Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers, privately and securely, without leaving Amazon SageMaker AI, so they can develop performant AI models faster.

Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required considerable effort. Beyond investing time in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting to deploy, manage, upgrade, and scale these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within their security boundaries, without moving it elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers switching back and forth between multiple disjointed interfaces. With SageMaker partner AI apps you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration. At launch, you can boost your team’s productivity and reduce time to market with: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and Lakera, to protect AI applications from security threats such as prompt attacks, data loss, and inappropriate content.
SageMaker partner AI apps are available in all currently supported regions except AWS GovCloud (US). To learn more, please visit the SageMaker partner AI apps developer guide.
Amazon SageMaker HyperPod now provides flexible training plans
Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring.

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.
SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit: SageMaker HyperPod, documentation, and the announcement blog.
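The pause-and-resume behavior described above can be sketched as a toy scheduler: a job runs only inside reserved capacity blocks, checkpointing at the end of each block and resuming in the next. The `schedule` helper and all numbers are illustrative assumptions, not SageMaker APIs:

```python
# Toy model of a flexible training plan: training runs only inside reserved
# capacity blocks, checkpoints at the end of a block, and resumes in the next.
# The helper and all numbers are illustrative, not SageMaker behavior.
def schedule(blocks, hours_needed):
    """blocks: sorted list of (start_hour, end_hour) capacity reservations."""
    remaining = hours_needed
    segments = []
    for start, end in blocks:
        if remaining <= 0:
            break
        run = min(end - start, remaining)
        segments.append((start, start + run))  # checkpoint saved at segment end
        remaining -= run
    if remaining > 0:
        raise ValueError("plan does not reserve enough compute")
    return segments

# A 30-hour job split across two capacity blocks with a 28-hour gap between them.
plan = schedule([(0, 20), (48, 72)], hours_needed=30)
print(plan)  # [(0, 20), (48, 58)]
```

The job trains for the first 20 hours, pauses at a checkpoint during the gap, then finishes with 10 more hours once the second block opens.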
Amazon Bedrock Marketplace brings over 100 models to Amazon Bedrock
Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.

Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.
Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). For more information, please refer to Amazon Bedrock Marketplace’s announcement blog or documentation.
AWS Education Equity Initiative to boost education for underserved learners
Amazon announces a five-year commitment of cloud technology and technical support for organizations creating digital learning solutions that expand access for underserved learners worldwide through the AWS Education Equity Initiative. While the use of educational technologies continues to rise, many organizations lack access to the cloud computing and AI resources needed to accelerate and scale their work to reach more learners in need.

Amazon is committing up to $100 million in AWS credits and technical advising to help socially-minded organizations build and scale learning solutions that utilize cloud and AI technologies. This will help reduce initial financial barriers and provide guidance on building and scaling AI-powered education solutions using AWS technologies. Eligible recipients, including socially-minded edtechs, social enterprises, non-profits, governments, and corporate social responsibility teams, must demonstrate how their solution will benefit students from underserved communities. The initiative is now accepting applications. To learn more and apply, visit the AWS Education Equity Initiative page.
Task governance is now generally available for Amazon SageMaker HyperPod
Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You have full visibility and control over compute resource allocation, ensuring the most critical tasks are prioritized and maximizing compute resource utilization, reducing model development costs by up to 40%.

With HyperPod task governance, administrators can more easily define priorities for different tasks and set limits on how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training. HyperPod pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task resumes from the last saved checkpoint when resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team’s tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters. Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage, the AWS News Blog, and the SageMaker AI documentation.
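The preemption flow described above (pause the lowest-priority task at a checkpoint, reallocate its accelerators, resume later) can be sketched in a few lines. The class and method names are invented for illustration and are not HyperPod APIs:

```python
# Minimal sketch of priority-based preemption: when a high-priority task
# arrives and the cluster is full, the lowest-priority running task is paused
# at a checkpoint and its accelerators are freed. Illustrative only.
class Cluster:
    def __init__(self, gpus):
        self.free = gpus
        self.running = []  # (priority, name, gpus); lower number = higher priority
        self.paused = []   # names of checkpointed, preempted tasks

    def submit(self, name, priority, gpus):
        # Free up capacity by preempting strictly lower-priority tasks.
        while self.free < gpus and self.running:
            lowest = max(self.running)   # numerically highest = least important
            if lowest[0] <= priority:
                break                    # nothing lower-priority left to preempt
            self.running.remove(lowest)
            self.paused.append(lowest[1])  # checkpoint saved, task paused
            self.free += lowest[2]
        if self.free >= gpus:
            self.running.append((priority, name, gpus))
            self.free -= gpus
            return True
        return False                     # task waits in the queue

cluster = Cluster(gpus=8)
cluster.submit("experiment", priority=5, gpus=8)
cluster.submit("prod-training", priority=1, gpus=8)  # preempts the experiment
print(cluster.paused, [name for _, name, _ in cluster.running])
```

In this sketch the low-priority experiment is checkpointed and paused so the production training job gets all eight accelerators, matching the behavior described above.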
Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)
Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting required for customers to build their own safeguards for image data or to spend cycles on manual evaluation that can be error-prone and tedious.

Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals, including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, and education. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution, with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies. This capability is available in preview with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS Regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West). To learn more, visit the Amazon Bedrock Guardrails product page, read the AWS News blog, and see the documentation.
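Conceptually, configurable category thresholds work like the stub below. The scores would come from a multimodal classifier; the categories mirror those named above, while the threshold values and function names are invented for illustration:

```python
# Illustrative per-category content-filter thresholds. In a real guardrail the
# scores come from multimodal classifiers; this stub only compares invented
# scores against invented thresholds.
THRESHOLDS = {"hate": 0.5, "insults": 0.5, "sexual": 0.3, "violence": 0.3}

def filter_content(scores: dict) -> str:
    for category, score in scores.items():
        if score >= THRESHOLDS.get(category, 1.0):
            return f"BLOCKED ({category})"
    return "ALLOWED"

# Hypothetical classifier scores for an image attachment.
verdict = filter_content({"hate": 0.1, "violence": 0.7})
print(verdict)  # BLOCKED (violence)
```

Lowering a category's threshold makes the filter stricter for that category, which is the knob the configurable thresholds above expose.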
Announcing new AWS AI Service Cards to advance responsible generative AI
Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services.

AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach. For more information, see the AI Service Cards for:
Amazon Nova Reel
Amazon Nova Canvas
Amazon Nova Micro, Lite and Pro
Amazon Titan Image Generator
Amazon Titan Text Embeddings
To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.
Amazon Bedrock announces preview of prompt caching
Today, AWS announces that Amazon Bedrock now supports prompt caching. Prompt caching is a new capability that can reduce costs by up to 90% and latency by up to 85% for supported models by caching frequently used prompts across multiple API calls. It allows you to cache repetitive inputs and avoid reprocessing context, such as long system prompts and common examples that help guide the model’s response. When a cache is used, fewer computing resources are needed to generate output. As a result, not only can we process your request faster, but we can also pass along the cost savings from using fewer resources.

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance. Prompt caching is now available on Claude 3.5 Haiku and Claude 3.5 Sonnet v2 in US West (Oregon) and US East (N. Virginia) via cross-region inference, and on the Nova Micro, Nova Lite, and Nova Pro models in US East (N. Virginia). At launch, only a select number of customers will have access to this feature. To learn more about participating in the preview, see this page. To learn more about prompt caching, see our documentation and blog.
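A back-of-the-envelope sketch shows why caching a shared prefix cuts cost: assume cached prefix tokens are billed at 10% of the normal input-token rate (an assumed figure consistent with the up-to-90% savings above; see the Bedrock pricing page for actual rates):

```python
# Illustrative cost model for prompt caching. The 10% cached-token rate is an
# assumption for illustration, not a published Bedrock price.
def request_cost(prefix_tokens, new_tokens, rate, cached, cache_discount=0.10):
    prefix_rate = rate * cache_discount if cached else rate
    return prefix_tokens * prefix_rate + new_tokens * rate

RATE = 1.0  # cost units per input token, illustrative
cold = request_cost(10_000, 200, RATE, cached=False)  # first call writes the cache
warm = request_cost(10_000, 200, RATE, cached=True)   # later calls read from it
savings = 1 - warm / cold
print(f"{savings:.0%}")  # prints 88%
```

The larger the shared prefix (long system prompts, common few-shot examples) relative to the per-request suffix, the closer the per-request savings approach the cached-token discount itself.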
Amazon Q Developer can now guide SageMaker Canvas users through ML development
Starting today, you can build ML models using natural language with Amazon Q Developer, now available in Amazon SageMaker Canvas in preview. You can now get generative AI-powered assistance through the ML lifecycle, from data preparation to model deployment. With Amazon Q Developer, users of all skill levels can use natural language to access expert guidance to build high-quality ML models, accelerating innovation and time to market.

Amazon Q Developer will break down your objective into specific ML tasks, define the appropriate ML problem type, and apply data preparation techniques to your data. Amazon Q Developer then guides you through the process of building, evaluating, and deploying custom ML models. ML models produced in SageMaker Canvas with Amazon Q Developer are production ready, can be registered in SageMaker Studio, and the code can be shared with data scientists for integration into downstream MLOps workflows. Amazon Q Developer is available in SageMaker Canvas in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Paris), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about using Amazon Q Developer with SageMaker Canvas, visit the website, read the AWS News blog, or view the technical documentation.
Amazon Bedrock Data Automation now available in preview
Today, we are announcing the preview launch of Amazon Bedrock Data Automation (BDA), a new feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. These insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. Developers can also customize BDA’s output to generate specific insights in consistent formats required by their systems and applications.

By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA offers high accuracy at lower cost than alternative solutions, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation. This ensures accurate insights from unstructured multimodal content. Developers can get started with BDA on the Bedrock console, where they can configure and customize output using their sample data. They can then integrate BDA’s unified multimodal inference API into their applications to process their unstructured content at scale with high accuracy and consistency. BDA is also integrated with Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multimodal content to provide more relevant responses for retrieval augmented generation (RAG). Bedrock Data Automation is available in preview in the US West (Oregon) AWS Region. To learn more, visit the Bedrock Data Automation page.
Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)
Today, we are announcing the support of GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances Generative AI applications by providing more comprehensive, relevant and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, and custom Generative AI applications by incorporating contextual information from your company’s data sources. Amazon Bedrock Knowledge Bases now offers a fully-managed GraphRAG capability with Amazon Neptune Analytics.

Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships. GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.
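The multi-hop idea behind GraphRAG can be shown with a toy entity graph: vector search matches documents one at a time, while a graph joins facts about the same entity drawn from different documents. The data and relation names below are invented for illustration:

```python
# Toy entity graph: each edge could have been extracted from a different
# source document, so answering the query requires joining facts about one
# entity across documents. All data and relation names are invented.
edges = {
    ("Cancun", "is_a"): {"beach_destination"},
    ("Cancun", "has_direct_flight_from"): {"JFK", "ORD"},
    ("Cancun", "known_for"): {"seafood"},
    ("Aspen", "is_a"): {"ski_destination"},
    ("Aspen", "has_direct_flight_from"): {"ORD"},
}

def matches(entity, requirements):
    # requirements: list of (relation, expected_value) pairs; all must hold.
    return all(value in edges.get((entity, rel), set()) for rel, value in requirements)

# "Beach destinations with direct flights from JFK and good seafood."
query = [("is_a", "beach_destination"),
         ("has_direct_flight_from", "JFK"),
         ("known_for", "seafood")]
hits = [e for e in ("Cancun", "Aspen") if matches(e, query)]
print(hits)  # ['Cancun']
```

A plain vector search might retrieve three separate documents (one per fact) without connecting them; the graph representation makes the conjunction answerable in one traversal, which is the kind of multi-step question the travel example above describes.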
Announcing Amazon SageMaker HyperPod recipes
Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often unfeasible for customers, as it often requires deep machine learning expertise that could cause further delays in time to market.

With SageMaker HyperPod recipes, customers of all skill levels can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice.
SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.
Announcing scenario analysis capability of Amazon Q in QuickSight (preview)
A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.

Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language and Amazon Q in QuickSight guides you through every step of advanced data analysis, suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side-by-side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.
Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.
Amazon Bedrock Knowledge Bases now processes multimodal data
Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs.

To get started, customers can choose between Amazon Bedrock Data Automation, a managed service that automatically extracts content from multimodal data (currently in preview), or FMs such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt. Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) region in preview. FM-based parsing is supported in all regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or an FM as a parser, please refer to the pricing page. To learn more, visit the Amazon Bedrock Knowledge Bases product documentation.
Amazon Bedrock Intelligent Prompt Routing is now available in preview
Amazon Bedrock Intelligent Prompt Routing routes prompts to different foundation models within a model family, helping you optimize for response quality and cost. Using advanced prompt matching and model understanding techniques, Intelligent Prompt Routing predicts the performance of each model for each request and dynamically routes each request to the model it predicts is most likely to give the desired response at the lowest cost. Customers can choose from two prompt routers in preview that route requests either between Claude 3.5 Sonnet and Claude Haiku, or between Llama 3.1 8B and Llama 3.1 70B.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. With Intelligent Prompt Routing, Amazon Bedrock can help customers build cost-effective generative AI applications with a combination of foundation models to get better performance at lower cost than a single foundation model. During preview, customers are charged regular on-demand pricing for the models that requests are routed to. Learn more in our documentation and blog.
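The routing idea can be sketched as: predict per-model response quality for each prompt, then pick the cheapest model predicted to clear a quality bar. The predictor, prices, and capability scores below are invented for illustration, not Bedrock's actual routing model:

```python
# Illustrative cost-aware router. Prices, capability scores, and the quality
# predictor are all invented; a real router learns these from data.
MODELS = [
    {"name": "haiku",  "price": 1.0, "capability": 0.6},
    {"name": "sonnet", "price": 4.0, "capability": 0.9},
]

def predicted_quality(model, prompt_complexity):
    # Hypothetical predictor: capable models degrade less on hard prompts.
    return model["capability"] * (1.0 - 0.5 * prompt_complexity)

def route(prompt_complexity, quality_bar=0.4):
    ok = [m for m in MODELS if predicted_quality(m, prompt_complexity) >= quality_bar]
    chosen = min(ok or MODELS, key=lambda m: m["price"])  # cheapest acceptable
    return chosen["name"]

print(route(0.2), route(0.9))  # easy prompt -> haiku, hard prompt -> sonnet
```

Easy prompts go to the cheaper model because it is predicted to answer well enough; hard prompts fall through to the stronger model, which is how a router can beat any single model on the cost-quality tradeoff.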
Announcing GenAI Index in Amazon Kendra
Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index, the Kendra GenAI Index, for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models.

Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Bases and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a knowledge base powered by the Kendra GenAI Index. Customers can also integrate such knowledge bases with other Bedrock services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources. Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) Regions. To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to the Amazon Kendra pricing page.
AWS Blogs
AWS Japan Blog (Japanese)
- Faster scaling and improved memory efficiency with Amazon ElastiCache for Valkey version 8.0
- Using Amazon ElastiCache as a cache for Amazon Keyspaces (for Apache Cassandra)
- Enhance ecommerce visuals with Avataar Creator Platform
- Sync Okta users and groups with Amazon QuickSight
- Creating conda packages and channels for AWS Deadline Cloud
- An event for beginners! AWS JumpStart 2025 event announcement
- [Event Report & Material Release] Future Engineering Environments Pioneered by Generative AI
AWS News Blog
- Introducing Buy with AWS: an accelerated procurement experience on AWS Partner sites, powered by AWS Marketplace
- Accelerate foundation model training and fine-tuning with new Amazon SageMaker HyperPod recipes
- AWS Education Equity Initiative: Applying generative AI to educate the next wave of innovators
- Solve complex problems with new scenario analysis capability in Amazon Q in QuickSight
- Use Amazon Q Developer to build ML models in Amazon SageMaker Canvas
- Amazon Bedrock Guardrails now supports multimodal toxicity detection with image support (preview)
- New Amazon Bedrock capabilities enhance data processing and retrieval
- Reduce costs and latency with Amazon Bedrock Intelligent Prompt Routing and prompt caching (preview)
- Amazon Bedrock Marketplace: Access over 100 foundation models in one place
- Meet your training timelines and budgets with new Amazon SageMaker HyperPod flexible training plans
- Maximize accelerator utilization for model development with new Amazon SageMaker HyperPod task governance
AWS Big Data Blog
- Simplify data access for your enterprise using Amazon SageMaker Lakehouse
- Enforce fine-grained access control on data lake tables using AWS Glue 5.0 integrated with AWS Lake Formation
- Use open table format libraries on AWS Glue 5.0 for Apache Spark
- Introducing AWS Glue 5.0 for Apache Spark
- Read and write S3 Iceberg table using AWS Glue Iceberg Rest Catalog from Open Source Apache Spark
- Author visual ETL flows on Amazon SageMaker Unified Studio (preview)
- Simplify data integration with AWS Glue and zero-ETL to Amazon SageMaker Lakehouse
- Catalog and govern Amazon Athena federated queries with Amazon SageMaker Lakehouse
- The next generation of Amazon SageMaker: The center for all your data, analytics, and AI
- How ANZ Institutional Division built a federated data platform to enable their domain teams to build data products to support business outcomes
AWS Database Blog
AWS HPC Blog
AWS for Industries
- Accelerate technology innovation and excellence with AWS Consumer Goods Competency Partners
- How agentic AI systems can solve the three most pressing problems in healthcare today
AWS Machine Learning Blog
- Amazon Bedrock Marketplace now includes NVIDIA models: Introducing NVIDIA Nemotron-4 NIM microservices
- Real value, real time: Production AI with Amazon SageMaker and Tecton
- Use Amazon Bedrock tooling with Amazon SageMaker JumpStart models
- A guide to Amazon Bedrock Model Distillation (preview)
- Build generative AI applications quickly with Amazon Bedrock IDE in Amazon SageMaker Unified Studio
- Scale ML workflows with Amazon SageMaker Studio and Amazon SageMaker HyperPod
- Introducing Amazon Kendra GenAI Index – Enhanced semantic search and retrieval capabilities
- Building Generative AI and ML solutions faster with AI apps from AWS partners using Amazon SageMaker