7/23/2024, 12:00:00 AM ~ 7/24/2024, 12:00:00 AM (UTC)

Recent Announcements

AWS Cost Categories now supports “Billing Entity” dimension

AWS Cost Categories has added a new dimension, “Billing Entity”, to its rules. You can now use eight dimension types when creating cost category rules: “Linked Account”, “Charge Type”, “Service”, “Usage Type”, “Cost Allocation Tags”, “Region”, “Billing Entity”, and another “Cost Category”.

AWS Cost Categories is a feature within the AWS Cost Management product suite that enables you to group cost and usage information into meaningful categories based on your needs. You create custom categories and map your cost and usage information into them using rules you define across these dimensions. Once cost categories are set up and enabled, you can view and manage your AWS cost and usage information by category in AWS Cost Management services, for example to understand the ownership of your spend at the cost category level in AWS Cost Explorer and the AWS Cost and Usage Report (CUR). Cost categories can be applied to your AWS cost and usage at the beginning of the month or retroactively for up to 12 months.

Adding Billing Entity gives customers more granular control over their cost categories: Billing Entity identifies whether an invoice or transaction is for AWS Marketplace or for purchases of other AWS services. Cost Categories is provided free of charge, and this feature is available in all Commercial Regions. To get started with cost categories, please visit the AWS Cost Categories product details page and AWS Cost Categories FAQs - Amazon Web Services.
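As a rough sketch, a rule using the new dimension might look like the following expression passed to the Cost Explorer CreateCostCategoryDefinition API. The category name, value, and rule are illustrative assumptions; check the API reference for the exact schema.

```python
import json

# Illustrative Cost Categories rule that maps AWS Marketplace charges to
# their own category value using the new BILLING_ENTITY dimension.
# Category and value names ("SpendOrigin", "Marketplace") are made up.
marketplace_rule = {
    "Value": "Marketplace",                # category value to assign
    "Rule": {
        "Dimensions": {
            "Key": "BILLING_ENTITY",
            "Values": ["AWS Marketplace"],  # vs. "AWS" for native services
        }
    },
    "Type": "REGULAR",
}

definition = {
    "Name": "SpendOrigin",
    "RuleVersion": "CostCategoryExpression.v1",
    "Rules": [marketplace_rule],
}

# With credentials configured, this could be applied roughly as:
#   import boto3
#   ce = boto3.client("ce")
#   ce.create_cost_category_definition(**definition)

print(json.dumps(definition, indent=2))
```

The same expression shape is used retroactively when the category is backfilled for prior months.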

Meta Llama 3.1 generative AI models now available in Amazon SageMaker JumpStart

The most advanced and capable Meta Llama models to date, Llama 3.1, are now available in Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models and built-in algorithms to help you get started with ML quickly. You can deploy and use Llama 3.1 models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.

Llama 3.1 models demonstrate significant improvements over previous versions thanks to increased training data and scale. The models support a 128K context length, 120K tokens more than Llama 3 and sixteen times its context capacity, and offer improved reasoning for multilingual dialogue use cases in eight languages. The models can draw on more information from lengthy text passages to make more informed decisions and leverage richer contextual data to generate more refined responses. According to Meta, Llama 3.1 405B is one of the largest publicly available foundation models and is well suited for synthetic data generation and model distillation, both of which can improve smaller Llama models. To use synthetic data to fine-tune models, you must comply with Meta’s license; read the EULA for additional information. All Llama 3.1 models provide state-of-the-art capabilities in general knowledge, math, tool use, and multilingual translation.

Llama 3.1 models are available today in SageMaker JumpStart in the US East (Ohio), US West (Oregon), and US East (N. Virginia) AWS Regions. To get started with Llama 3.1 models in SageMaker JumpStart, see the documentation and blog.
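A minimal sketch of a programmatic invocation: the payload below follows the request shape commonly used by JumpStart Llama text-generation endpoints, but the field names and the model ID in the comments are assumptions to verify against the model's example notebook.

```python
import json

# Request payload in the shape commonly used by SageMaker JumpStart
# Llama text-generation endpoints (field names are assumptions; confirm
# against the model's example notebook).
payload = {
    "inputs": "Summarize the benefits of a 128K context window.",
    "parameters": {
        "max_new_tokens": 256,
        "temperature": 0.6,
        "top_p": 0.9,
    },
}

body = json.dumps(payload)

# With the SageMaker Python SDK, deployment and invocation would look
# roughly like this (requires AWS credentials; model_id is an assumption):
#   from sagemaker.jumpstart.model import JumpStartModel
#   model = JumpStartModel(model_id="meta-textgeneration-llama-3-1-8b-instruct")
#   predictor = model.deploy(accept_eula=True)  # EULA must be accepted
#   predictor.predict(payload)

print(body)
```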

AWS AppConfig announces feature flag targets, variants, and splits

Today, AWS announces advanced targeting capabilities for AWS AppConfig feature flags. Customers can set up multiple values within flag data and target those values to fine-grained, high-cardinality user segments. One common use case for feature flag targets is an allow list: a customer specifies user IDs or customer tiers, and a new or premium feature is enabled only for those segments. Another is a traffic split, for example sending 15% of your user base to a user-experience optimization so a limited cohort can be tested before rolling the feature out to all users.

Customers start by creating an AWS AppConfig feature flag, setting its value, and then creating one or more variants of that flag with different variations of data. They then create rules to determine which variant should be targeted at which segments. Once the flag, variants, and rules are created, customers use the latest version of the AWS AppConfig Agent running in EC2, Lambda, ECS, EKS, or on premises to retrieve the flag data. When requesting flag data, customers pass in context, such as user IDs or other user metadata, which is evaluated client-side against the flag rules to return the appropriate and specific data.

AWS AppConfig’s feature flag targets, variants, and splits are available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide.
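Conceptually, the client-side evaluation the Agent performs looks like the sketch below: an allow-list rule checked first, then a deterministic percentage split. The rule model and variant names here are illustrative, not the Agent's actual rule syntax.

```python
import hashlib

def stable_bucket(user_id: str, salt: str = "flag-v1") -> int:
    """Map a user ID to a stable bucket in [0, 100) so the same user
    always lands in the same side of a percentage split."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def evaluate_flag(user_id: str, tier: str) -> dict:
    """Illustrative variant selection from request context."""
    # Rule 1 (allow list): premium-tier users always get the premium variant.
    if tier == "premium":
        return {"enabled": True, "variant": "premium"}
    # Rule 2 (split): 15% of remaining traffic gets the experimental variant.
    if stable_bucket(user_id) < 15:
        return {"enabled": True, "variant": "experiment"}
    # Default variant for everyone else.
    return {"enabled": False, "variant": "default"}
```

Hashing the user ID, rather than sampling randomly per request, keeps each user's experience consistent across calls, which is the usual design choice for cohort-based rollouts.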

Amazon ECS now supports Amazon Linux 2023 and more for on-premises container workloads

Amazon Elastic Container Service (Amazon ECS) now supports managing on-premises workloads running on Amazon Linux 2023, Fedora 40, Debian 11, Debian 12, Ubuntu 24, and CentOS Stream 9. Amazon ECS Anywhere is a feature of Amazon ECS that enables you to run and manage container-based applications on premises, including on your own virtual machines (VMs) and bare-metal servers.

Amazon ECS Anywhere is available in all AWS Regions globally. To learn more, visit the ECS Anywhere user guide.

Meta Llama 3.1 generative AI models now available in Amazon Bedrock

The most advanced Meta Llama models to date, Llama 3.1, are now available in Amazon Bedrock. Amazon Bedrock offers a turnkey way to build generative AI applications with Llama. Llama 3.1 is a collection of models in 8B, 70B, and 405B parameter sizes, offering new capabilities for your generative AI applications.

All Llama 3.1 models demonstrate significant improvements over previous versions. The models support a 128K context length, sixteen times the context capacity of Llama 3, and exhibit improved reasoning for multilingual dialogue use cases in eight languages. They can draw on more information from lengthy text to make more informed decisions and leverage richer contextual data to generate more subtle, refined responses. According to Meta, Llama 3.1 405B is one of the best and largest publicly available foundation models and is well suited for synthetic data generation and model distillation. Llama 3.1 models also provide state-of-the-art capabilities in general knowledge, math, tool use, and multilingual translation.

Meta’s Llama 3.1 models are available in Amazon Bedrock in the US West (Oregon) Region. To learn more, read the AWS News launch blog, the Llama in Amazon Bedrock product page, and the documentation. To get started with Llama 3.1 in Amazon Bedrock, visit the Amazon Bedrock console. To request to be considered for access to the preview of Llama 3.1 405B in Amazon Bedrock, contact your AWS account team or submit a support ticket via the AWS Management Console, selecting Bedrock as the Service and Models as the Category.
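A minimal sketch of invoking a Llama 3.1 model through the Bedrock Runtime InvokeModel API: the request body fields follow the Llama schema documented for Bedrock, but the model ID below is an assumption; check the Bedrock console for the exact identifier in your Region.

```python
import json

# Model ID is an assumption; verify in the Bedrock console for your Region.
model_id = "meta.llama3-1-8b-instruct-v1:0"

# Request body in the Llama-on-Bedrock schema (prompt, max_gen_len,
# temperature, top_p).
body = json.dumps({
    "prompt": "Explain model distillation in two sentences.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
})

# With boto3, AWS credentials, and model access granted:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-west-2")
#   response = client.invoke_model(modelId=model_id, body=body)
#   print(json.loads(response["body"].read())["generation"])

print(body)
```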

AWS Mainframe Modernization Code Conversion with mLogica is now generally available

We are excited to announce public availability of AWS Mainframe Modernization Code Conversion with mLogica. This new capability enables automated conversion of legacy code written in Assembler to COBOL. Most mainframe environments include Assembler code that is expensive to maintain; converting it unblocks refactor projects, replatform projects, and on-mainframe modernization initiatives, whether within AWS Mainframe Modernization toolchains or alongside third-party modernization toolchains.

The AWS Mainframe Modernization service allows you to modernize and migrate on-premises mainframe applications to AWS. It offers automated refactor and replatform patterns, as well as augmentation patterns via data replication and file transfer. AWS Mainframe Modernization Code Conversion is available through the AWS Mainframe Modernization service console. To learn more, visit the AWS Mainframe Modernization product and documentation pages.

Amazon EKS introduces new controls for Kubernetes version support policy

Today, Amazon EKS announces new controls for its Kubernetes version policy, allowing cluster administrators to configure end-of-standard-support behavior for EKS clusters. The behavior can easily be set through the EKS console and CLI, and is available for Kubernetes versions in standard support.

These controls make it easier for you to choose which clusters should enter extended support and which can be automatically upgraded at the end of standard support, giving you the flexibility to balance version upgrades against business requirements for the environment or applications running on each cluster. Controls for Kubernetes version policy are available in all AWS Regions. To learn more, refer to the EKS documentation.
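As a rough sketch of the API shape, the end-of-standard-support behavior maps to an upgrade-policy parameter on the cluster configuration; the parameter names below reflect the EKS UpdateClusterConfig API as we understand it, and the cluster name is illustrative.

```python
# Illustrative request for setting an EKS cluster's end-of-standard-support
# behavior. "EXTENDED" opts the cluster into extended support at end of
# standard support; "STANDARD" lets it be automatically upgraded instead.
# Cluster name is made up; parameter names are assumptions to verify
# against the EKS API reference.
request = {
    "name": "prod-cluster",
    "upgradePolicy": {"supportType": "EXTENDED"},
}

# With credentials configured, this would be applied roughly as:
#   import boto3
#   eks = boto3.client("eks")
#   eks.update_cluster_config(**request)

print(request["upgradePolicy"]["supportType"])
```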

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Big Data Blog

AWS Database Blog

AWS HPC Blog

AWS for Industries

AWS Machine Learning Blog

Networking & Content Delivery

Open Source Project

AWS CLI

AWS CDK

Amplify for JavaScript

Amplify UI