6/18/2024, 12:00:00 AM ~ 6/19/2024, 12:00:00 AM (UTC)
Recent Announcements
Amazon DataZone launches custom blueprint configurations for AWS services
Amazon DataZone now supports custom blueprint configurations for AWS services, allowing customers to optimize resource usage and costs by using existing AWS Identity and Access Management (IAM) roles and AWS services such as Amazon S3. Amazon DataZone is a data management service that lets customers catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls.

Amazon DataZone’s blueprints help administrators define which AWS tools and services are deployed for data producers, such as data engineers, and data consumers, such as data scientists, simplifying access to data and increasing collaboration among project members. Custom blueprints for AWS services join the existing family of Amazon DataZone blueprints: data lake, data warehouse, and Amazon SageMaker. With custom blueprints, administrators can integrate Amazon DataZone into their data pipelines by using existing IAM roles to publish the data assets owned by those roles to the catalog, establishing governed sharing of those assets and strengthening governance across the entire infrastructure.
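A minimal boto3 sketch of enabling a blueprint configuration with existing IAM roles; the domain and blueprint identifiers, role ARNs, and Region below are placeholders, and your configuration may need additional parameters.

```python
import boto3

datazone = boto3.client("datazone", region_name="us-east-1")

# Enable a blueprint in the domain, pointing it at existing IAM roles
# so DataZone provisions resources and manages access with those roles.
response = datazone.put_environment_blueprint_configuration(
    domainIdentifier="dzd_EXAMPLE123",            # placeholder domain ID
    environmentBlueprintIdentifier="bp_EXAMPLE",  # placeholder blueprint ID
    enabledRegions=["us-east-1"],
    manageAccessRoleArn="arn:aws:iam::111122223333:role/ExistingManageAccessRole",
    provisioningRoleArn="arn:aws:iam::111122223333:role/ExistingProvisioningRole",
)
print(response["environmentBlueprintId"])
```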
Amazon EC2 C7g and R7g instances are now available in additional regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g and R7g instances are available in the Europe (Milan), Asia Pacific (Hong Kong), and South America (São Paulo) Regions. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy for the same performance than comparable EC2 instances, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (EBS).
Amazon EC2 C7g and R7g instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Asia Pacific (Hyderabad, Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), China (Beijing, Ningxia), Europe (Frankfurt, Ireland, London, Milan, Spain, Stockholm), and South America (São Paulo). To learn more, see Amazon EC2 C7g and R7g. To learn how to migrate your workloads to AWS Graviton-based instances, see the AWS Graviton Fast Start Program.
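As a quick sketch, here is how you might launch a C7g instance in the newly added Milan Region with boto3; the AMI ID is a placeholder you would replace with an arm64 AMI of your choice.

```python
import boto3

# Milan is one of the newly added Regions for C7g/R7g.
ec2 = boto3.client("ec2", region_name="eu-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: Graviton requires an arm64 AMI
    InstanceType="c7g.xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```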
CodeCatalyst allows customers to use Amazon Q Developer to choose a blueprint
Today, AWS announces the general availability of a new Amazon Q Developer capability in Amazon CodeCatalyst. Customers can now use Amazon Q to help them pick the best blueprint for their needs when starting a new project or working on an existing one. Previously, customers had to read through the descriptions of the available blueprints to find the best match. Now, customers can describe what they want to create and receive direct guidance on which blueprint to pick. Amazon Q will also create an issue in the project for each requirement that isn’t covered by the resources the blueprint creates. Users can then customize their project by assigning those issues to developers to add that functionality. They can even assign these issues to Amazon Q itself, which will then attempt to write code to solve the problem.

Customers can use blueprints to create projects in CodeCatalyst that include resources such as a source repository with sample code, CI/CD workflows that build and test code, and integrated issue-tracking tools. Customers can now use Amazon Q to help them create projects or add functionality to existing projects with blueprints. If the space has custom blueprints, Amazon Q Developer will learn them and include them in its recommendations. For more information, see the documentation or visit the Amazon CodeCatalyst website. This capability is available in Regions where CodeCatalyst and Amazon Bedrock are available. There is no change to pricing.
AWS Glue Usage Profiles is now generally available
Today, AWS announces the general availability of AWS Glue Usage Profiles, a new cost-control capability that allows administrators to set preventative controls and limits on the resources consumed by their Glue jobs and notebook sessions. With AWS Glue Usage Profiles, administrators can create different cost profiles for different classes of users. Each profile is a unique set of parameters that can be assigned to different types of users. For example, a cost profile for a data engineer working on a production pipeline could allow an unrestricted number of workers, whereas the cost profile for a test user could restrict the number of workers.

You can get started by creating a new usage profile in the AWS Glue Studio console or with the Glue Usage Profiles APIs. Next, you assign that profile to an IAM user or role. After these steps, all new Glue jobs or sessions created with that IAM user or role will have the limits specified in the assigned usage profile.
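A minimal boto3 sketch of creating a usage profile that caps workers for test users; the exact shape of the Configuration map (key names and value types) is an assumption based on the CreateUsageProfile API, so verify it against the Glue API reference before relying on it.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Assumed Configuration shape: per-parameter default and maximum values
# for jobs and interactive sessions. Key names may differ; check the
# Glue CreateUsageProfile API reference.
glue.create_usage_profile(
    Name="test-users",
    Description="Restricted worker counts for test users",
    Configuration={
        "JobConfiguration": {
            "numberOfWorkers": {"DefaultValue": "2", "MaxValue": "5"},
        },
        "SessionConfiguration": {
            "numberOfWorkers": {"DefaultValue": "2", "MaxValue": "5"},
        },
    },
)
```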
Amazon MWAA now supports Custom Web Server URLs
Amazon Managed Workflows for Apache Airflow (MWAA) now supports custom domain names for the Airflow web server, simplifying access to the Airflow user interface.

Amazon MWAA is a managed service for Apache Airflow that lets you use the familiar Apache Airflow platform to orchestrate your workflows, with improved scalability, availability, and security and without the operational burden of managing the underlying infrastructure. Amazon MWAA now adds the ability to customize the redirection URL that MWAA’s single sign-on (SSO) uses after authenticating the user against their IAM credentials. This allows customers who use private web servers with load balancers, custom DNS entries, or proxies to point users to a user-friendly web address while maintaining the simplicity of MWAA’s IAM integration. You can launch or upgrade an Apache Airflow environment with a custom URL on Amazon MWAA with just a few clicks in the AWS Management Console, in all currently supported Amazon MWAA Regions. To learn more about custom domains, visit the Amazon MWAA documentation.

Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
Announcing AWS Transfer Family workshop to build event-driven managed file transfer workflows
You can now learn how to automate your Managed File Transfer (MFT) workflows in AWS using event-driven architectures with a new workshop in AWS Workshop Studio. This workshop, titled Event-Driven MFT Workshop, provides hands-on labs for using AWS Transfer Family in combination with other AWS services to modernize your business-to-business file transfer and file-processing workflows.

AWS Transfer Family provides fully managed, scalable, and secure file transfers to AWS storage over the SFTP, AS2, FTPS, and FTP protocols. The workshop guides you through building bidirectional workflows that transfer files between remote SFTP servers and Amazon S3, and automating processing of the transferred files to prepare them for integration with your applications and data lakes in AWS. The solution demonstrates how to use AWS Transfer Family’s SFTP connectors to transfer files, Amazon EventBridge to listen and respond to file transfer events, AWS Step Functions to define workflows, and AWS Lambda to run pre- or post-transfer file processing such as PGP-based encryption and decryption (see the sketch below). To get started with the workshop, visit Transfer Family Event-Driven MFT Workshop. To learn more about AWS Transfer Family, see the AWS Transfer Family product page.
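A minimal boto3 sketch of the event-driven glue described above: an EventBridge rule that matches Transfer Family events and forwards them to a Step Functions state machine. The state machine and role ARNs are placeholders, and matching on the event source alone (rather than specific detail types) is a simplifying assumption; the workshop narrows this further.

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

# Match all events emitted by AWS Transfer Family; a production rule
# would usually add "detail-type" filters for specific transfer events.
rule_arn = events.put_rule(
    Name="transfer-family-mft-events",
    EventPattern=json.dumps({"source": ["aws.transfer"]}),
    State="ENABLED",
)["RuleArn"]

# Forward matched events to the Step Functions state machine that runs
# the file-processing workflow (ARNs are placeholders).
events.put_targets(
    Rule="transfer-family-mft-events",
    Targets=[{
        "Id": "mft-state-machine",
        "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:MftWorkflow",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctions",
    }],
)
```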
Amazon EC2 D3 instances are now available in Europe (Paris) region
Starting today, Amazon EC2 D3 instances, the latest generation of dense HDD-storage instances, are available in the Europe (Paris) Region.

Amazon EC2 D3 instances are powered by 2nd generation Intel Xeon Scalable processors (Cascade Lake) and provide up to 48 TB of local HDD storage. D3 instances are ideal for workloads such as distributed/clustered file systems, big data and analytics, and high-capacity data lakes. With D3 instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD-storage workloads. D3 instances are offered in 4 sizes: xlarge, 2xlarge, 4xlarge, and 8xlarge. D3 instances can be purchased with Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand, or Spot pricing, or as Dedicated Instances. To get started with D3 instances, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3 instances page.
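As a quick sketch, you can confirm the D3 sizes and their local HDD storage in the Paris Region with boto3:

```python
import boto3

# Paris is eu-west-3.
ec2 = boto3.client("ec2", region_name="eu-west-3")

response = ec2.describe_instance_types(
    InstanceTypes=["d3.xlarge", "d3.2xlarge", "d3.4xlarge", "d3.8xlarge"],
)
for itype in response["InstanceTypes"]:
    storage_gb = itype["InstanceStorageInfo"]["TotalSizeInGB"]
    print(f'{itype["InstanceType"]}: {storage_gb} GB local HDD storage')
```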
Amazon Connect Cases is now available in additional Asia Pacific regions
Amazon Connect Cases is now available in the Asia Pacific (Seoul) and Asia Pacific (Tokyo) AWS regions. Amazon Connect Cases provides built-in case management capabilities that make it easy for your contact center agents to create, collaborate on, and quickly resolve customer issues that require multiple customer conversations and follow-up tasks.
Amazon Redshift Query Editor V2 now supports 100MB file uploads
Amazon Redshift Query Editor V2 now supports uploading local files up to 100MB in size when loading data into your Amazon Redshift databases. This increased file size limit provides more flexibility for ingesting larger datasets directly from your local environment.

With the new 100MB file size limit, data analysts, engineers, and developers can load larger datasets from local files into their Redshift clusters or workgroups using Query Editor V2. This enhancement is particularly beneficial when working with CSV, JSON, or other structured data files that previously exceeded the 5MB limit. By streamlining the upload process for sizeable local files, you can expedite data ingestion and analysis workflows on Amazon Redshift. To learn more, see the Amazon Redshift documentation.
Amazon OpenSearch Serverless now available in South America (Sao Paulo) region
We are excited to announce the availability of Amazon OpenSearch Serverless in the South America (São Paulo) Region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand.

With this launch, OpenSearch Serverless is now available in 12 Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), and South America (São Paulo). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
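A minimal boto3 sketch of creating a search collection in the newly supported Region; the collection name is a placeholder, and a real deployment also needs encryption and network security policies before the collection becomes active.

```python
import boto3

# South America (São Paulo) is sa-east-1.
aoss = boto3.client("opensearchserverless", region_name="sa-east-1")

response = aoss.create_collection(
    name="product-search",  # placeholder name
    type="SEARCH",          # or "TIMESERIES" for log analytics workloads
    description="Search collection in South America (Sao Paulo)",
)
print(response["createCollectionDetail"]["status"])
```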
Introducing Maven, Python, and NuGet support in Amazon CodeCatalyst package repositories
Today, AWS announces support for the Maven, Python, and NuGet package formats in Amazon CodeCatalyst package repositories. CodeCatalyst customers can now securely store, publish, and share Maven, Python, and NuGet packages using popular package managers such as mvn, pip, and nuget. Through your CodeCatalyst package repositories, you can also access open source packages from 6 additional public package registries. Your packages remain available to your development teams even if public packages and registries become unavailable from other service providers.
Amazon CodeCatalyst now offers a capability to analyze issues and recommend granular tasks
Amazon CodeCatalyst now offers a new capability, powered by Amazon Q, that helps customers analyze issues and recommends granular tasks. These tasks can then be individually assigned to users or to Amazon Q itself, helping you accelerate work. Previously, customers could create issues to track work that needed to be done on a project, but they had to manually break that work into more granular tasks that could be assigned to others on the team. Now, customers can ask Amazon Q to analyze an issue for complexity and suggest ways of breaking the work into individual tasks.

This capability is available in the US West (Oregon) Region. For more information, see the documentation or visit the Amazon CodeCatalyst website.
Amazon Kinesis Video Streams is now available in AWS GovCloud (US) Regions
Amazon Kinesis Video Streams is now available in the AWS GovCloud (US-East and US-West) Regions. Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Amazon Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams lets you play back video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and Amazon SageMaker.

For more information, please visit the Amazon Kinesis Video Streams product page, and see the AWS Region table for complete regional availability information. Note that Amazon Kinesis Video Streams with WebRTC is not yet available in the AWS GovCloud (US) Regions.
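A minimal boto3 sketch of creating a video stream in one of the newly supported GovCloud Regions; the stream name and retention period are placeholders.

```python
import boto3

# AWS GovCloud (US-West) is us-gov-west-1.
kvs = boto3.client("kinesisvideo", region_name="us-gov-west-1")

response = kvs.create_stream(
    StreamName="camera-feed-01",  # placeholder name
    DataRetentionInHours=24,      # keep ingested footage for one day
)
print(response["StreamARN"])
```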
Amazon Redshift announces support for VARBYTE 16MB data type
Amazon Redshift has extended the VARBYTE data type from the previous maximum size of 1,024,000 bytes (see the VARBYTE What’s New announcement from December 2021) to a maximum size of 16,777,216 bytes. VARBYTE is a variable-size data type for storing and representing variable-length binary strings. With this announcement, Amazon Redshift supports all existing VARBYTE functionality with 16MB VARBYTE values. The VARBYTE data type can now ingest data larger than 1,024,000 bytes from the Parquet, CSV, and text file formats. The default size for a VARBYTE(n) column (if n is not specified) remains 64,000 bytes.

VARBYTE 16MB support is now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. For more information or to get started with the Amazon Redshift VARBYTE data type, see the documentation.
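A minimal sketch of declaring and loading a column at the new 16MB maximum, using the open-source redshift_connector driver; the connection details are placeholders.

```python
import redshift_connector

# Placeholder connection details.
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="example-password",
)
cur = conn.cursor()

# Declare a column at the new 16MB maximum (16,777,216 bytes).
cur.execute("CREATE TABLE IF NOT EXISTS blobs (id INT, payload VARBYTE(16777216))")

# FROM_HEX converts a hex string into a binary VARBYTE value.
cur.execute("INSERT INTO blobs VALUES (1, FROM_HEX('DEADBEEF'))")
conn.commit()
```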
AWS Glue serverless Spark UI now supports rolling log files
Today, AWS announces rolling log file support for the AWS Glue serverless Apache Spark UI. The serverless Spark UI gives you detailed information about your AWS Glue Spark jobs. With rolling log support, you can now use it to monitor and debug long-running batch and streaming Glue jobs.
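A minimal boto3 sketch of creating a streaming Glue job with the Spark UI enabled via the documented --enable-spark-ui and --spark-event-logs-path job parameters; the role ARN, script location, and bucket names are placeholders, and rolling of the event log files for long-running jobs is handled on the service side once the Spark UI is enabled.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# --enable-spark-ui and --spark-event-logs-path are documented Glue
# special job parameters; ARNs and S3 paths below are placeholders.
glue.create_job(
    Name="long-running-streaming-job",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "gluestreaming",
        "ScriptLocation": "s3://example-bucket/scripts/stream.py",
    },
    DefaultArguments={
        "--enable-spark-ui": "true",
        "--spark-event-logs-path": "s3://example-bucket/spark-logs/",
    },
    GlueVersion="4.0",
    NumberOfWorkers=2,
    WorkerType="G.1X",
)
```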
AWS Blogs
AWS Japan Blog (Japanese)
- Modernize data observability with zero ETL integration between Amazon OpenSearch Service and Amazon S3
- One-click deployment of ELYZA’s Japanese LLM with Amazon SageMaker JumpStart
- [Contribution] Strengthening ransomware countermeasures by using Amazon FSx for NetApp ONTAP immutable backup
- AWS generative AI case study from Nowcast Co., Ltd.: applying LLMs to financial statement data extraction
- Chaos Kitty will return with a power-up for AWS Summit Japan 2024!
- Continuous data retention using Amazon DynamoDB incremental exports
AWS Cloud Operations & Migrations Blog
- Automate CloudWatch Dashboard creation for your AWS Elemental MediaPackage and AWS Elemental MediaLive
- Improve application reliability with effective SLOs
AWS Big Data Blog
- Build multimodal search with Amazon OpenSearch Service
- Introducing AWS Glue usage profiles for flexible cost control
Containers
Front-End Web & Mobile
AWS Machine Learning Blog
- Improving air quality with generative AI
- Use zero-shot large language models on Amazon Bedrock for custom named entity recognition
- Safeguard a generative AI travel agent with prompt engineering and Guardrails for Amazon Bedrock
- Streamline financial workflows with generative AI for email automation
AWS for M&E Blog
AWS Security Blog
Open Source Project
AWS CLI
Amplify for JavaScript
- tsc-compliance-test@0.1.40
- 2024-06-18 Amplify JS release - aws-amplify@6.3.7
- @aws-amplify/pubsub@6.1.10
- @aws-amplify/auth@6.3.6
- @aws-amplify/adapter-nextjs@1.2.5