9/4/2024, 12:00:00 AM ~ 9/5/2024, 12:00:00 AM (UTC)

Recent Announcements

Agents for Amazon Bedrock now support Claude 3.5 Sonnet

Agents for Amazon Bedrock enable developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver answers based on company knowledge sources. To complete complex tasks with high accuracy, the reasoning capabilities of the underlying foundation model (FM) play a critical role.

Today, Amazon Bedrock customers in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore) can use Claude 3.5 Sonnet with their Bedrock Agents. Claude 3.5 Sonnet is Anthropic’s latest foundation model and ranks among the best in the world. It delivers improved speed, performance, and agentic reasoning compared with Claude 3 Opus. Additionally, with this model, Bedrock Agents now support Anthropic’s recommended tool use for function calling, which leads to an improved developer and end-user experience. To learn more, read the Claude in Amazon Bedrock product page and documentation. To get started with Claude 3.5 Sonnet in Amazon Bedrock, visit the Amazon Bedrock console. For the list of models supported by Bedrock Agents, visit the documentation page.
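As a rough illustration of adopting the new model, the boto3 sketch below creates an agent that uses Claude 3.5 Sonnet as its foundation model. The agent name, instruction, and IAM role ARN are placeholders, and the create_agent fields shown should be checked against the current Bedrock Agents API reference.

```python
import boto3

# Sketch: create a Bedrock Agent backed by Claude 3.5 Sonnet.
# The agent name, instruction, and role ARN are placeholders; the model ID
# is the Claude 3.5 Sonnet identifier Bedrock listed at launch.
bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_agent(
    agentName="order-support-agent",  # hypothetical agent name
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder role
    instruction="Answer customer questions using the attached knowledge base.",
)
print(response["agent"]["agentId"])
```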

AWS AppSync enhances API monitoring with new DEBUG and INFO logging levels

Today, AWS announces the addition of DEBUG and INFO logging levels for AWS AppSync GraphQL APIs. These new logging levels provide more granular control over log verbosity and make it easier to troubleshoot your APIs while optimizing readability and costs.

With DEBUG and INFO levels, alongside the existing ERROR and ALL levels, customers now have greater flexibility to capture relevant log information at the appropriate level of detail. This allows customers to more precisely pinpoint and resolve issues by sending just the right amount of information to their Amazon CloudWatch Logs. Customers can now log messages from their code with the “error”, “log”, and “debug” functions and configure the level at which logs will be sent to CloudWatch Logs on their API. The API logging level can be changed at any time without having to change any resolver or function code. For example, an API’s logging level can be set to DEBUG during development and troubleshooting but changed to INFO in production. The logging level can be set to ALL to see additional trace information. The new logging levels are available in all AWS Regions where AppSync is supported. To learn more about AppSync’s new logging levels and how to implement them in your GraphQL APIs, see the AWS AppSync Developer Guide.
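Because the logging level lives in the API’s log configuration rather than in resolver code, it can be changed with a single UpdateGraphqlApi call. The boto3 sketch below uses placeholder values for the API id, API name, and CloudWatch Logs role ARN; verify the logConfig fields against the AppSync API reference.

```python
import boto3

appsync = boto3.client("appsync", region_name="us-east-1")

# Sketch: raise the API's field log level to DEBUG while troubleshooting.
# Switch fieldLogLevel back to "INFO" (or "ERROR") for production.
appsync.update_graphql_api(
    apiId="abcdefghijklmnopqrstuvwxyz",  # placeholder API id
    name="my-graphql-api",               # placeholder; must match the existing API name
    logConfig={
        "fieldLogLevel": "DEBUG",
        "cloudWatchLogsRoleArn": "arn:aws:iam::123456789012:role/AppSyncCloudWatchLogs",  # placeholder
    },
)
```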

Use Apache Spark on Amazon EMR Serverless directly from Amazon SageMaker Studio

You can now run petabyte-scale data analytics and machine learning on Amazon EMR Serverless directly from Amazon SageMaker Studio notebooks. EMR Serverless automatically provisions and scales the required resources, allowing you to focus on your data and models without having to configure, optimize, tune, or manage clusters. EMR Serverless automatically installs and configures open source frameworks and provides a performance-optimized runtime that is compatible with and faster than standard open source.

With this release, you can now visually create and browse EMR Serverless applications directly from SageMaker Studio and connect to them in a few clicks. Once connected to an EMR Serverless application, you can use Spark SQL, Scala, or Python to interactively query, explore, and visualize data, and run Apache Spark jobs to process data directly from Studio notebooks. Jobs run fast because they use EMR’s performance-optimized versions of Spark. For example, Spark on EMR 7.1 is 4.5x faster than its open source equivalent. EMR Serverless offers fine-grained automatic scaling, which provisions and quickly scales the compute and memory resources to match the requirements of your application, and you pay for only what you use. These features are supported on SageMaker Distribution 1.10 and above, and are generally available in all AWS Regions where SageMaker Studio is available. To learn more, read the blog Use LangChain with PySpark for Processing documents at massive scale with Amazon SageMaker Studio and EMR Serverless, or the SageMaker Studio documentation.
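The snippet below sketches the kind of interactive PySpark work described above, as it might run in a SageMaker Studio notebook connected to an EMR Serverless application. The S3 path, table name, and columns are hypothetical; in a connected notebook the spark session is typically already provided, and getOrCreate() simply reuses it.

```python
from pyspark.sql import SparkSession

# Reuse the session provided by the connected notebook, or build one when
# running standalone.
spark = SparkSession.builder.appName("studio-emr-serverless-demo").getOrCreate()

# Placeholder dataset: read Parquet files from S3 and register a temp view.
events = spark.read.parquet("s3://my-bucket/events/")
events.createOrReplaceTempView("events")

# Interactive Spark SQL: aggregate and inspect a sample of the results.
daily_counts = spark.sql(
    "SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date ORDER BY event_date"
)
daily_counts.show(10)
```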

Amazon SES announces enhanced onboarding with adaptive setup wizard and Virtual Deliverability Manager

Today, Amazon Simple Email Service (SES) launched enhancements to its onboarding experience to help customers easily discover and activate key SES features. The SES console now features an adaptive setup page that brings recommendations for optimal setup to the forefront. Additionally, the update introduces the option to enable the Virtual Deliverability Manager (VDM) within the initial onboarding wizard, offering maximum guidance from the beginning of the setup process.

Previously, the SES onboarding process focused primarily on the initial steps of authenticating domains and obtaining production access. With an on-demand advisor check, this workflow guides customers to begin sending authenticated email that meets mailbox provider requirements with DKIM alignment, SPF alignment, and a DMARC policy. Now, the enhanced onboarding experience empowers customers to optimize their email deliverability from the start. Customers can easily configure email monitoring, provision dedicated IP addresses, and adjust sending limits to meet their projected volume, all through the adaptive setup page that detects their current configuration and provides tailored recommendations. With these enhancements, SES customers can be confident they are setting up their email infrastructure for long-term success from the beginning of their SES journey. SES offers a guided onboarding experience in all AWS Regions where SES is available. For more information, please visit the documentation for getting started with SES.
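The wizard enables VDM from the console, but for completeness the same account-level setting can also be turned on programmatically. A minimal boto3 sketch using the SES v2 PutAccountVdmAttributes operation follows; the dashboard and guardian flags are optional sub-settings, and the attribute names should be verified against the SES API reference.

```python
import boto3

sesv2 = boto3.client("sesv2", region_name="us-east-1")

# Sketch: enable Virtual Deliverability Manager for the account, which the
# new onboarding wizard can also do from the console.
sesv2.put_account_vdm_attributes(
    VdmAttributes={
        "VdmEnabled": "ENABLED",
        "DashboardAttributes": {"EngagementMetrics": "ENABLED"},
        "GuardianAttributes": {"OptimizedSharedDelivery": "ENABLED"},
    }
)
```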

Stability AI’s Top 3 Text-to-Image Models Now Available in Amazon Bedrock

Stable Image Ultra, Stable Diffusion 3 Large (SD3 Large), and Stable Image Core models from Stability AI are now generally available in Amazon Bedrock. These models empower customers in various industries, including media, marketing, retail, and game development, to generate high-quality visuals with speed and precision.

All three models are capable of generating photo-realistic images with exceptional detail, color accuracy, and lifelike lighting. Each model caters to different use cases:

Stable Image Ultra – produces the highest quality, photo-realistic outputs, making it perfect for professional print media and large-format applications. This model excels at rendering exceptional detail and realism.

Stable Diffusion 3 Large – strikes an ideal balance between generation speed and output quality, making it ideal for creating high-volume, high-quality digital assets like websites, newsletters, and marketing materials.

Stable Image Core – optimized for fast and affordable image generation, making it great for rapidly iterating on concepts during the ideation phase.

These models enable customers to streamline creative processes, swiftly adapt to market trends, drive innovation through visual brainstorming, and gain a competitive advantage by boosting productivity, reducing costs, and improving visual communication across business functions. Stability AI’s Stable Image Ultra, SD3 Large, and Stable Image Core models are now available in Amazon Bedrock in the US West (Oregon) AWS Region. To learn more, read the AWS News Blog or visit the Stability AI in Amazon Bedrock product page and documentation. To get started with SD3 in Amazon Bedrock, visit the Amazon Bedrock console.
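A minimal invocation sketch follows, assuming the Stability AI model IDs and request/response fields that the Bedrock documentation lists for these models; verify both before relying on them.

```python
import base64
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Sketch: text-to-image with Stable Image Core. The request/response shape
# follows Stability AI's documented format for Bedrock; check the docs for
# the exact fields and current model IDs.
body = {
    "prompt": "product photo of a ceramic mug on a wooden table, soft morning light",
    "mode": "text-to-image",
    "aspect_ratio": "1:1",
    "output_format": "png",
}

response = bedrock_runtime.invoke_model(
    modelId="stability.stable-image-core-v1:0",  # swap in the Ultra or SD3 Large ID as needed
    body=json.dumps(body),
)

payload = json.loads(response["body"].read())
with open("mug.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))  # assumed base64-encoded image list
```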

Amazon Timestream for InfluxDB now supports enhanced management features

We are excited to announce the launch of enhanced management options for Amazon Timestream for InfluxDB, allowing you to scale your instance sizes up or down as needed and update your deployment configuration between Single-AZ and Multi-AZ, giving you greater flexibility and control over your time-series data processing and analysis.

Timestream for InfluxDB is extensively used in applications that require high-performance time-series data processing and analysis. You can quickly respond to changes in data ingestion rates, query volumes, or other workload fluctuations by scaling your instance sizes up and down, ensuring that your Timestream for InfluxDB instances always have the necessary resources to handle your workload cost-effectively. You can also change your availability configuration by moving between Single-AZ and Multi-AZ configurations depending on your needs and budget. This means you can focus on building and deploying your applications, rather than worrying about instance sizing and management. Amazon Timestream for InfluxDB is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm). You can create an Amazon Timestream for InfluxDB instance from the Amazon Timestream console, AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. To learn more about compute scaling for Amazon Timestream for InfluxDB, visit the product page, documentation, and pricing page.
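A hedged boto3 sketch of the new scaling path follows; it assumes the UpdateDbInstance operation accepts the instance-type and deployment-type fields shown, and the instance identifier and target size are placeholders to be checked against the Timestream for InfluxDB documentation.

```python
import boto3

influxdb = boto3.client("timestream-influxdb", region_name="us-east-1")

# Sketch: scale an existing instance to a larger size and switch it to a
# Multi-AZ deployment in a single update.
influxdb.update_db_instance(
    identifier="my-influxdb-instance",      # placeholder instance identifier
    dbInstanceType="db.influx.xlarge",      # assumed target instance size
    deploymentType="WITH_MULTIAZ_STANDBY",  # move from Single-AZ to Multi-AZ
)
```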

Amazon EC2 X2idn instances now available in Middle East (Bahrain) region

Starting today, memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X2idn instances are available in the Middle East (Bahrain) Region. These instances, powered by 3rd generation Intel Xeon Scalable processors and built on the AWS Nitro System, are designed for memory-intensive workloads. They deliver improvements in performance, price performance, and cost per GiB of memory compared to previous-generation X1 instances. These instances are SAP-certified for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, SAP BW/4HANA, and SAP NetWeaver workloads on any database.

With this launch, X2idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon, N. California), Africa (Cape Town), Asia Pacific (Hyderabad, Jakarta, Malaysia, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), China (Beijing, Ningxia), Middle East (Bahrain, Dubai), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Spain, Zurich), Canada (Central), South America (São Paulo), and the AWS GovCloud (US-East, US-West) Regions. X2idn instances can be purchased with Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand, or Spot Instances, or as Dedicated Instances or Dedicated Hosts. To learn more, visit the EC2 X2i Instances page, or connect with your AWS Support contacts.
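For reference, launching one of these instances in the new Region is an ordinary RunInstances call pointed at me-south-1; in the boto3 sketch below the AMI, key pair, and subnet IDs are placeholders for your own resources.

```python
import boto3

# Sketch: launch an X2idn instance in the Middle East (Bahrain) Region.
ec2 = boto3.client("ec2", region_name="me-south-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI available in me-south-1
    InstanceType="x2idn.16xlarge",        # one of the X2idn sizes
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
```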

AWS Blogs

AWS Japan Blog (Japanese)

AWS News Blog

AWS Cloud Operations Blog

AWS Big Data Blog

AWS Compute Blog

AWS Database Blog

AWS for Industries

AWS Machine Learning Blog

AWS for M&E Blog

AWS Quantum Technologies Blog

AWS Security Blog

Open Source Project

AWS CLI

Amplify for JavaScript