5/24/2024, 12:00:00 AM ~ 5/27/2024, 12:00:00 AM (UTC)

Recent Announcements

Mistral Small foundation model now available in Amazon Bedrock

The Mistral Small foundation model from Mistral AI is now generally available in Amazon Bedrock. You can now access four high-performing models from Mistral AI in Amazon Bedrock: Mistral Small, Mistral Large, Mistral 7B, and Mixtral 8x7B, further expanding model choice. Mistral Small is a highly efficient large language model optimized for high-volume, low-latency language tasks. It provides outstanding performance at a cost-effective price point. Key features of Mistral Small include retrieval-augmented generation (RAG) specialization, coding proficiency, and multilingual capabilities.

Mistral Small is well suited to straightforward tasks that can be performed in bulk, such as classification, customer support, or text generation. The model specializes in RAG, ensuring that important information is retained even in long context windows, which can extend up to 32K tokens. It excels at code generation, review, and commenting, and supports all major coding languages. It also delivers top-tier multilingual performance in English, French, German, Spanish, and Italian, and supports dozens of other languages. The model comes with built-in, efficient guardrails for safety.

Mistral AI's Mistral Small foundation model is now available in Amazon Bedrock in the US East (N. Virginia) AWS Region. To learn more, read the AWS News launch blog, the Mistral AI in Amazon Bedrock product page, and the documentation. To get started with Mistral Small in Amazon Bedrock, visit the Amazon Bedrock console.
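As a sketch of how a bulk classification task might call Mistral Small, the snippet below builds a Bedrock request body using the [INST] instruction format that Mistral models expect. The model ID and request fields are assumptions drawn from Bedrock's Mistral model documentation; verify both in the Bedrock console before use.

```python
import json

# Model ID as listed for Mistral Small at launch (an assumption; confirm
# in the Bedrock console, since IDs can vary by Region and model version).
MODEL_ID = "mistral.mistral-small-2402-v1:0"

def build_request(prompt: str, max_tokens: int = 256, temperature: float = 0.5) -> str:
    """Build the JSON request body for a Mistral model on Bedrock.

    Mistral models expect the instruction wrapped in [INST] ... [/INST] tags.
    """
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_tokens": max_tokens,
        "temperature": temperature,
    })

body = build_request(
    "Classify this ticket as BILLING, TECHNICAL, or OTHER: 'I was charged twice.'"
)

# Uncomment to invoke the model (requires AWS credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(modelId=MODEL_ID, body=body)
# print(json.loads(response["body"].read())["outputs"][0]["text"])
```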

Connect your Jupyter notebooks to Amazon EMR Serverless using Apache Livy endpoints

Today, we are excited to announce that Amazon EMR Serverless now supports endpoints for Apache Livy. Customers can now securely connect their Jupyter notebooks and manage Apache Spark workloads using Livy's REST interface.

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple and cost-effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. With Livy endpoints, setting up a connection is easy: just point the Livy client in your on-premises notebook running Sparkmagic kernels to the EMR Serverless endpoint URL. You can then interactively query, explore, and visualize data, and run Spark workloads from Jupyter notebooks without having to manage clusters or servers. In addition, you can use the Livy REST APIs for use cases that need interactive code execution outside notebooks.
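As a minimal sketch of what the Livy REST interface looks like, the snippet below builds the JSON body for Livy's POST /sessions call, which starts an interactive Spark session. The endpoint URL is a hypothetical placeholder; in practice a Sparkmagic kernel pointed at the EMR Serverless endpoint handles request signing and session management for you.

```python
# Hypothetical endpoint URL; find the real one in the EMR Serverless
# console after enabling the Livy endpoint on your application.
LIVY_ENDPOINT = "https://<application-id>.livy.emr-serverless.example.amazonaws.com"

def session_request(name: str, spark_conf: dict = None) -> dict:
    """Build the JSON body for Livy's POST /sessions API.

    'kind' selects the interpreter; Sparkmagic notebooks typically use
    pyspark. 'conf' passes Spark configuration through to the session.
    """
    return {"kind": "pyspark", "name": name, "conf": spark_conf or {}}

payload = session_request("demo-session", {"spark.executor.memory": "4g"})

# Uncomment to create the session directly over REST (requests to the
# endpoint must be SigV4-signed with your AWS credentials):
# import requests
# resp = requests.post(f"{LIVY_ENDPOINT}/sessions", json=payload)
# print(resp.json()["id"], resp.json()["state"])
```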

PostgreSQL 17 Beta 1 is now available in Amazon RDS Database Preview Environment

Amazon RDS for PostgreSQL 17 Beta 1 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. Deployments of PostgreSQL 17 Beta 1 in the Amazon RDS Database Preview Environment have the benefits of a fully managed database.

PostgreSQL 17 includes updates to vacuuming that reduce memory usage, shorten the time to finish vacuuming, and show the progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for the JSON_TABLE feature, which converts JSON into a standard PostgreSQL table. The MERGE command now supports the RETURNING clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details.

Amazon RDS Database Preview Environment database instances are retained for a maximum of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
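The JSON_TABLE and MERGE ... RETURNING features mentioned above can be illustrated with a short SQL sketch; the table names here are hypothetical, and merge_action() is the PostgreSQL 17 helper that reports what MERGE did to each returned row.

```sql
-- JSON_TABLE: convert a JSON document into ordinary rows and columns.
SELECT *
FROM JSON_TABLE(
  '[{"name":"alice","score":91},{"name":"bob","score":78}]'::jsonb,
  '$[*]'
  COLUMNS (name text PATH '$.name', score int PATH '$.score')
) AS t;

-- MERGE ... RETURNING: upsert rows and see which action applied to each.
MERGE INTO scores s
USING staging st ON s.name = st.name
WHEN MATCHED THEN UPDATE SET score = st.score
WHEN NOT MATCHED THEN INSERT (name, score) VALUES (st.name, st.score)
RETURNING merge_action(), s.*;
```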

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.30

Kubernetes version 1.30 introduces several new features and bug fixes, and AWS is excited to announce that you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.30. Starting today, you can create new EKS clusters using v1.30 and upgrade your existing clusters to v1.30 using the Amazon EKS console, the eksctl command line interface, or an infrastructure-as-code tool.

Kubernetes version 1.30 includes stable support for pod scheduling readiness and the minimum domains parameter for PodTopologySpread constraints. As a reminder, starting with Kubernetes version 1.30, any newly created managed node groups will automatically default to using AL2023 as the node operating system. For detailed information on major changes in Kubernetes version 1.30, see the Kubernetes project release notes.

Kubernetes v1.30 support for Amazon EKS is available in all AWS Regions where Amazon EKS is available, including the AWS GovCloud (US) Regions. You can learn more about the Kubernetes versions available on Amazon EKS and the instructions to update your cluster to version 1.30 in the Amazon EKS documentation. Amazon EKS Distro builds of Kubernetes v1.30 are available through the ECR Public Gallery and GitHub. Learn more about the Amazon EKS version lifecycle policies in the documentation.
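As a sketch, the eksctl upgrade path mentioned above might look like the following; the cluster and node group names are placeholders, and you should consult the eksctl documentation for the flags supported by your version.

```shell
# Upgrade the control plane of an existing cluster to Kubernetes 1.30
eksctl upgrade cluster --name my-cluster --version 1.30 --approve

# Then upgrade a managed node group to match the control plane version
eksctl upgrade nodegroup --cluster my-cluster --name my-nodegroup \
  --kubernetes-version 1.30
```

Upgrading node groups after the control plane keeps the kubelet version within the supported skew of the API server.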

AWS Launches Console-based Bulk Policy Migration for Billing and Cost Management Console Access

The AWS Billing and Cost Management console now supports a simplified, console-based migration experience for affected policies containing retired IAM actions (aws-portal). Customers who have not yet migrated to fine-grained IAM actions can trigger this experience by clicking the Update IAM Policies recommended action on the Billing and Cost Management home page. The experience identifies affected policies, suggests equivalent new actions that match customers' current access, provides testing options, and completes the migration of all affected policies across the organization.

The experience automatically identifies the required new fine-grained actions, making it easy for customers to maintain their current access post-migration. It provides the flexibility to test with a few accounts and to roll back changes with a single click, making the migration a low-risk operation. Moreover, the experience offers optional customization, letting customers broaden or fine-tune their access by modifying the AWS-recommended IAM action mapping, as well as migrate selected accounts one at a time.
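As an illustration of the kind of change the migration performs, a policy granting the retired aws-portal:ViewBilling action would be rewritten to grant fine-grained billing actions instead. The replacement list below is an illustrative subset, not the exact mapping the console generates; rely on the console's suggested actions for your actual policies.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FineGrainedBillingReadAccess",
      "Effect": "Allow",
      "Action": [
        "billing:GetBillingData",
        "billing:GetBillingDetails",
        "billing:ListBillingViews"
      ],
      "Resource": "*"
    }
  ]
}
```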

AWS Chatbot now supports tagging of AWS Chatbot resources

AWS Chatbot now enables customers to tag AWS Chatbot resources. Tags are simple key-value pairs that customers can assign to AWS resources, such as AWS Chatbot channel configurations, to organize, search for, and identify resources, and to control access.

Prior to today, customers could not tag AWS Chatbot resources and therefore could not use tag-based controls to manage access to them. By tagging AWS Chatbot resources, customers can now enforce tag-based controls in their environments. Customers can manage tags for AWS Chatbot resources using the AWS CLI, the SDKs, or the AWS Management Console. Support for tagging AWS Chatbot resources is available at no additional cost in all AWS Regions where AWS Chatbot is offered. To learn more, visit the Tagging your AWS Chatbot resources documentation page.
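As a sketch of tagging a Chatbot resource from the AWS CLI, assuming a hypothetical channel configuration ARN; the tag argument shape shown follows the Chatbot API's TagKey/TagValue structure, so verify it against your CLI version's reference before use.

```shell
# Tag a Chatbot channel configuration (ARN is a placeholder)
aws chatbot tag-resource \
  --resource-arn arn:aws:chatbot::111122223333:chat-configuration/slack-channel/my-channel \
  --tags TagKey=team,TagValue=payments

# List the tags on the resource to confirm
aws chatbot list-tags-for-resource \
  --resource-arn arn:aws:chatbot::111122223333:chat-configuration/slack-channel/my-channel
```

With the tag in place, an IAM policy condition such as aws:ResourceTag/team can scope who may modify that configuration.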

Introducing the Amazon Kinesis Data Streams Apache Spark Structured Streaming Connector for Amazon EMR

We are excited to announce the launch of the Amazon Kinesis Data Streams Connector for Spark Structured Streaming on Amazon EMR. The new connector makes it easy for you to build real-time streaming applications and pipelines that consume Amazon Kinesis Data Streams using Apache Spark Structured Streaming. Starting Amazon EMR 7.1, the connector comes pre-packaged on Amazon EMR on EKS, EMR on EC2 and EMR Serverless. Now, you do not need to build or download any packages and can focus on building your business logic using the familiar and optimized Spark Data Source APIs when consuming data from your Kinesis data streams.\n Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at massive scale. Amazon EMR is the cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using Apache Spark and other open-source frameworks. The new Amazon Kinesis Data Streams Connector for Apache Spark is faster, more scalable, and fault-tolerant than alternative open-source options. The connector also supports Enhanced Fan-out consumption with dedicated read throughput. To learn more and see a code example, go to Build Spark Structured Streaming applications with the open source connector for Amazon Kinesis Data Streams.
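As a sketch of consuming a stream with the connector on an EMR 7.1+ cluster, assuming the open-source connector's aws-kinesis data source name and option keys; the stream name and Region are placeholders, and the option names should be checked against the connector's documentation for your EMR release.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kinesis-demo").getOrCreate()

# Read from a Kinesis data stream as a streaming DataFrame. The option
# keys below follow the open-source connector's conventions (assumed).
events = (
    spark.readStream
    .format("aws-kinesis")
    .option("kinesis.streamName", "my-stream")          # placeholder
    .option("kinesis.region", "us-east-1")              # placeholder
    .option("kinesis.startingposition", "LATEST")
    # Use SubscribeToShard for Enhanced Fan-Out dedicated throughput;
    # GetRecords uses shared polling throughput.
    .option("kinesis.consumerType", "GetRecords")
    .load()
)

# Each record's payload arrives as bytes in the 'data' column.
query = (
    events.selectExpr("CAST(data AS STRING) AS payload")
    .writeStream
    .format("console")
    .start()
)
query.awaitTermination()
```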

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

Containers

AWS Database Blog

AWS for M&E Blog

Open Source Project

AWS CLI

AWS CDK