3/14/2025 ~ 3/17/2025 (UTC)
Recent Announcements
AWS Verified Access achieves FedRAMP High and Moderate authorization
AWS Verified Access is a FedRAMP High authorized service in the AWS GovCloud (US) Regions and a FedRAMP Moderate authorized service in the AWS US East and US West commercial Regions. Federal agencies, public sector organizations, and other enterprises with FedRAMP compliance requirements can now leverage AWS Verified Access to enable secure VPN-less access to corporate HTTP applications, non-HTTP applications, and infrastructure resources. Built on AWS Zero Trust principles, Verified Access lets you implement a work-from-anywhere model with added security and scalability.

AWS Verified Access allows admins to define fine-grained access policies based upon a user's identity and device posture. It evaluates every connection request and continuously monitors active connections, terminating connections when the security requirements specified in the access policies aren't met. For example, you can centrally define access policies that grant access to Finance applications only to authenticated users in the Finance group who are on compliant, managed devices. Further, you can also use Verified Access to enable access to non-HTTP(S) applications and resources such as databases, SAP applications, and Git repositories running on EC2 instances. Verified Access simplifies your security operations by allowing you to centrally create, group, and manage access policies for all applications and resources with similar security requirements from a single interface. To learn more about AWS Verified Access, visit the product page.
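To illustrate the policy model, here is a minimal boto3 sketch that attaches a Cedar policy to a Verified Access group. The group ID, the policy reference names (`idc`, `device`), and the context attributes are hypothetical; the exact context schema depends on your configured trust providers, so check the Verified Access policy documentation before using this.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical Cedar policy: allow only authenticated members of the Finance
# group on devices the device trust provider reports as low risk.
policy = """
permit(principal, action, resource)
when {
    context.idc.groups.contains("Finance")
    && context.device.risk == "LOW"
};
"""

ec2.modify_verified_access_group_policy(
    VerifiedAccessGroupId="vagr-0123456789abcdef0",  # hypothetical group ID
    PolicyEnabled=True,
    PolicyDocument=policy,
)
```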
Announcing support of AWS Glue Data Catalog views with AWS Glue 5.0
Today, we announce support for AWS Glue Data Catalog views with AWS Glue 5.0 for Apache Spark jobs. With this support, customers can create views from Glue 5.0 Spark jobs that can be queried from multiple engines without requiring access to the referenced tables.

AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources. AWS Glue Data Catalog views are virtual tables whose contents are defined by a SQL query that references one or more tables. These views support multiple SQL query engines, so you can access the same view across different AWS services. Administrators can control underlying data access using the rich SQL dialect provided by AWS Glue 5.0 Spark jobs. Access is managed with AWS Lake Formation permissions, including named resource grants, data filters, and Lake Formation tags, and all requests are logged in AWS CloudTrail. AWS Glue Data Catalog views are generally available on AWS Glue 5.0, in all Regions where AWS Glue 5.0 is supported. To learn more, visit the AWS Glue product page and our documentation.
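As a sketch of what this looks like inside a Glue 5.0 Spark job, the PySpark snippet below creates and queries a Data Catalog view. The database, table, and view names are hypothetical, and the multi-dialect view DDL shown mirrors the form documented for AWS analytics engines; verify the exact statement for Glue 5.0 in the AWS Glue documentation.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("catalog-view-demo").getOrCreate()

# Create a Data Catalog view over an existing catalog table. Readers granted
# Lake Formation permission on the view do not need access to the base table.
spark.sql("""
    CREATE PROTECTED MULTI DIALECT VIEW glue_db.orders_eu
    SECURITY DEFINER
    AS SELECT order_id, customer_id, amount
       FROM glue_db.orders
       WHERE region = 'EU'
""")

# Query the view like any other table.
spark.sql("SELECT * FROM glue_db.orders_eu LIMIT 10").show()
```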
Amazon RDS for PostgreSQL, MySQL, and MariaDB now supports M8g and R8g database instances in additional AWS Regions
Amazon Relational Database Service (RDS) for PostgreSQL, MySQL, and MariaDB now supports AWS Graviton4-based M8g database instances in the Europe (Spain), Europe (Stockholm), and Europe (London) Regions, and R8g database instances in the Europe (Ireland), Europe (Spain), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Graviton4-based instances provide up to a 40% performance improvement and up to a 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon RDS open source databases, depending on database engine, version, and workload. M8g and R8g database instances are available on Amazon RDS for PostgreSQL versions 17.1 and higher, 16.1 and higher, 15.2 and higher, 14.5 and higher, and 13.8 and higher. They are also available on Amazon RDS for MySQL version 8.0.32 and higher, and on Amazon RDS for MariaDB versions 11.4.3 and higher, 10.11.7 and higher, 10.6.13 and higher, 10.5.20 and higher, and 10.4.29 and higher. For more details on these instances and the supported versions in each region, refer to the Amazon RDS User Guide. Get started by creating a fully managed M8g or R8g database instance using the Amazon RDS Management Console. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. For information on specific engine versions that support these DB instance types, please see the Amazon RDS documentation.
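For readers who prefer the SDK to the console, here is a minimal boto3 sketch for launching a PostgreSQL instance on an M8g class. The instance identifier and sizing are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-2")  # Europe (London)

# Launch a PostgreSQL instance on a Graviton4-based M8g instance class.
rds.create_db_instance(
    DBInstanceIdentifier="demo-pg-m8g",    # hypothetical name
    DBInstanceClass="db.m8g.large",
    Engine="postgres",
    EngineVersion="17.1",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,         # store credentials in Secrets Manager
)
```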
Amazon Aurora now supports R8g database instances in additional AWS Regions
AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the Europe (Ireland), Europe (Spain), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions. R8g instances offer larger instance sizes, up to 48xlarge, feature an 8:1 ratio of memory to vCPU, and use the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to a 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora databases, depending on database engine, version, and workload.

AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, R8g DB instances offer up to 192 vCPU, up to 50 Gbps enhanced networking bandwidth, and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). You can spin up Graviton4-based R8g database instances in the Amazon RDS Management Console or using the AWS CLI; upgrading a database instance to Graviton4 requires a simple instance type modification. For more details, refer to the Aurora documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
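Since the upgrade is just an instance class modification, a minimal boto3 sketch looks like the following; the instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # Europe (Ireland)

# Upgrade an existing Aurora instance to a Graviton4-based R8g class.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-writer-1",  # hypothetical instance
    DBInstanceClass="db.r8g.xlarge",
    ApplyImmediately=True,                      # or defer to the maintenance window
)
```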
AWS CodePipeline introduces CodeBuild and Commands rules for stage-level conditions
AWS CodePipeline introduces the CodeBuild rule and the Commands rule, which customers can use in stage-level conditions of V2 type pipelines to gate a pipeline execution. You can use the CodeBuild rule to start a CodeBuild build, or the Commands rule to run simple shell commands, before exiting a stage, when all actions in the stage have completed successfully, or when any action in the stage has failed.

These new rules provide more flexibility in your deployment process and enable more release safety controls. With these two rules, you can run integration tests as a stage-level condition when your deployment completes and automatically roll back or fail your deployment when the integration tests fail. You can also run custom cleanup scripts using these new rules when the stage execution fails. To learn more about using these rules in stage-level conditions in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. This feature is available in all regions where AWS CodePipeline is supported.
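As a sketch of where these rules live in a V2 pipeline definition, the fragment below (a Python dict as you would pass to `codepipeline.update_pipeline`) shows an onSuccess condition that runs integration tests with the Commands rule and rolls back the stage if they fail. The rule field layout follows the stage-condition structure, but the script name and exact configuration are assumptions; confirm the schema in the CodePipeline reference.

```python
# Fragment of a V2 pipeline stage with an onSuccess condition.
stage = {
    "name": "Deploy",
    "actions": [
        # ... deployment actions ...
    ],
    "onSuccess": {
        "conditions": [
            {
                "result": "ROLLBACK",  # roll back the stage if the rule fails
                "rules": [
                    {
                        "name": "IntegrationTests",
                        "ruleTypeId": {
                            "category": "Rule",
                            "owner": "AWS",
                            "provider": "Commands",
                            "version": "1",
                        },
                        "commands": [
                            "./run-integration-tests.sh",  # hypothetical script
                        ],
                    }
                ],
            }
        ]
    },
}
```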
Amazon GuardDuty Extended Threat Detection now available in AWS GovCloud (US) and China Regions
Amazon GuardDuty Extended Threat Detection is now automatically available in AWS GovCloud (US) and China Regions. This capability allows you to identify sophisticated, multi-stage attacks targeting your AWS accounts, workloads, and data. You can now use new attack sequence findings that cover multiple resources and data sources over an extensive time period, allowing you to spend less time on first-level analysis and more time responding to critical-severity threats to minimize business impact.

GuardDuty Extended Threat Detection uses artificial intelligence and machine learning algorithms trained at AWS scale and automatically correlates security signals from across AWS services to detect critical threats. It identifies attack sequences, such as credential compromise followed by data exfiltration, and represents them as a single, critical-severity finding. The finding includes an incident summary, a detailed events timeline, mapping to MITRE ATT&CK® tactics and techniques, and remediation recommendations. GuardDuty Extended Threat Detection is also available in all AWS commercial Regions where GuardDuty is available. This capability is automatically enabled for all new and existing GuardDuty customers at no additional cost. You do not need to enable all GuardDuty protection plans. However, enabling additional protection plans such as GuardDuty S3 Protection will increase the breadth of security signals, allowing for more comprehensive threat analysis and coverage of attack scenarios. You can take action on findings directly from the GuardDuty console or via its integrations with AWS Security Hub and Amazon EventBridge. To get started, visit the Amazon GuardDuty product page or try GuardDuty free for 30 days on the AWS Free Tier.
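If you want to triage these findings programmatically rather than in the console, a minimal boto3 sketch like the following pulls critical-severity findings (attack sequence findings are surfaced at critical severity, 9.0 and above); adjust the filter to your environment.

```python
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Filter for critical-severity findings, which include attack sequences.
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 9}}},
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for f in findings:
        print(f["Severity"], f["Type"], f["Title"])
```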
Meta’s Llama 3.2 models are now available for fine-tuning in Amazon Bedrock
Amazon Bedrock now supports fine-tuning for Meta’s Llama 3.2 models (1B, 3B, 11B, and 90B), enabling businesses to customize these generative AI models with their own data. Llama 3.2 models are available in various sizes, from small (1B and 3B) to medium-sized multimodal models (11B and 90B). Llama 3.2 11B and 90B models are the first in the Llama series to support both text and vision tasks, achieved by integrating image encoder representations into the language model. Fine-tuning allows you to adapt Llama 3.2 models for domain-specific tasks, enhancing performance for specialized use cases.

The Llama 3.2 90B model excels in advanced reasoning, long-form text generation, coding, multilingual translation, and image reasoning tasks such as captioning, visual question answering, and document analysis. The Llama 3.2 11B model is designed for content creation, conversational AI, and enterprise applications, with strong performance in text summarization, sentiment analysis, and visual understanding. For resource-constrained scenarios, the lightweight Llama 3.2 1B and 3B models enable on-device applications, excelling in tasks like text summarization, classification, and retrieval while ensuring low latency and enhanced privacy. By fine-tuning Llama 3.2 models in Amazon Bedrock, businesses can further enhance their capabilities for specialized applications, improving accuracy and relevance without needing to build models from scratch. You can fine-tune Llama 3.2 models in Amazon Bedrock in the US West (Oregon) AWS Region. For pricing, visit the Amazon Bedrock pricing page. To get started, see the Amazon Bedrock user guide and visit the Amazon Bedrock console.
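A minimal boto3 sketch for starting a fine-tuning job follows. The role ARN, S3 locations, hyperparameter values, and the base model identifier are placeholders; look up the exact Llama 3.2 model ID and supported hyperparameters in the Bedrock console and user guide.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")  # US West (Oregon)

# Start a fine-tuning job for Llama 3.2 11B on your own JSONL training data.
bedrock.create_model_customization_job(
    jobName="llama32-11b-support-ft",
    customModelName="llama32-11b-support",
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    baseModelIdentifier="meta.llama3-2-11b-instruct-v1:0",  # assumed model ID
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
```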
Amazon EMR Serverless Streaming jobs are now available in the AWS GovCloud (US) Regions
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce that Amazon EMR Serverless Streaming jobs, which enable you to continuously analyze and process streaming data, are now available in the AWS GovCloud (US) Regions.

Streaming has become vital for businesses to gain continuous insights from data sources like sensors, IoT devices, and web logs. However, processing streaming data can be challenging due to requirements such as high availability, resilience to failures, and integration with streaming services. Amazon EMR Serverless Streaming jobs have built-in features to address these challenges. They offer high availability through multi-AZ (Availability Zone) resiliency, automatically failing over to healthy AZs, and increased resiliency through automatic job retries on failures and log management features like log rotation and compaction, which prevent the accumulation of log files that might lead to job failures. In addition, Amazon EMR Serverless Streaming jobs support processing data from streaming services like self-managed Apache Kafka clusters and Amazon Managed Streaming for Apache Kafka, and are now integrated with Amazon Kinesis Data Streams through a new built-in Amazon Kinesis Data Streams Connector, making it easier to build end-to-end streaming pipelines. To get started, visit the Amazon EMR Serverless Streaming jobs page in the Amazon EMR Serverless User Guide.
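A minimal boto3 sketch for starting a streaming job on an existing EMR Serverless application follows. The application ID, role, and script location are placeholders, and the `mode` and `retryPolicy` shapes should be verified against the current EMR Serverless API reference.

```python
import boto3

emr = boto3.client("emr-serverless", region_name="us-gov-west-1")

# Start a long-running Spark streaming job; mode="STREAMING" selects
# streaming execution with automatic retries.
emr.start_job_run(
    applicationId="00abcdef12345678",  # hypothetical application ID
    executionRoleArn="arn:aws-us-gov:iam::123456789012:role/EmrServerlessJobRole",
    mode="STREAMING",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/streaming_app.py",  # hypothetical script
        }
    },
    retryPolicy={"maxFailedAttemptsPerHour": 5},
)
```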
Amazon Kinesis Data Streams now supports Internet Protocol version 6
Amazon Kinesis Data Streams now allows customers to make API requests over Internet Protocol version 6 (IPv6). Customers now have the option of using either IPv6 or IPv4 when sending requests over dual-stack public endpoints.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces. Many devices and networks today already use IPv6, and now they can easily write to and read from data streams. Support for IPv6 with Kinesis Data Streams is available in all Regions where Kinesis Data Streams is available, except for AWS GovCloud (US) and China Regions. See the Region table for a full listing of our Regions. To learn more about Kinesis Data Streams, please refer to our Developer Guide.
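In the AWS SDKs, opting into the dual-stack endpoints is a client configuration flag. A minimal boto3 sketch, with a hypothetical stream name:

```python
import boto3
from botocore.config import Config

# Use the dual-stack endpoint so requests can travel over IPv6 where available.
kinesis = boto3.client(
    "kinesis",
    region_name="us-east-1",
    config=Config(use_dualstack_endpoint=True),
)

kinesis.put_record(
    StreamName="my-stream",  # hypothetical stream
    Data=b'{"sensor": 42, "temp": 21.5}',
    PartitionKey="sensor-42",
)
```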
Amazon EMR Serverless achieves FedRAMP High authorization
Amazon EMR Serverless is now a FedRAMP High authorized service in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. Federal agencies, public sector organizations, and other enterprises with FedRAMP High compliance requirements can now leverage EMR Serverless to run Apache Spark and Hive workloads.

Amazon EMR Serverless is a serverless option that makes it simple for data analysts and engineers to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. The Federal Risk and Authorization Management Program (FedRAMP) is a US government-wide program that delivers a standard approach to the security assessment, authorization, and continuous monitoring for cloud products and services. To get started with Amazon EMR Serverless, visit the User Guide.
AWS CodeConnections adds support for new condition key
AWS CodeConnections now provides greater control over the creation of hosts, with a new IAM condition key for self-managed GitLab and GitHub Enterprise Server hosts. The new condition key allows you to set up IAM policies that specify the VPC you want all connections to use when accessing your repositories.

With today’s release, AWS CodeConnections has added a condition key (codeconnections:VpcId) that lets you enforce that hosts are created or updated with a specified VPC ID. This gives admins greater control to manage traffic through VPCs for specific use cases. For example, you can now centralize all repository access through a single VPC. To learn more about using the new condition key, visit our documentation. To learn more about what connections in AWS CodeConnections are and how they work, visit our documentation.
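A sketch of such a policy, expressed as a Python dict for use with the IAM APIs; the VPC ID is a placeholder, and the action names shown are the host create/update actions this key is meant to gate.

```python
import json

# Deny host creation and updates unless the host uses the approved VPC.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "codeconnections:CreateHost",
                "codeconnections:UpdateHost",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "codeconnections:VpcId": "vpc-0123456789abcdef0"  # placeholder
                }
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```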
Announcing the New AWS Wickr Admin Console
AWS Wickr is excited to announce a redesigned admin experience that’s now fully integrated with the AWS Management Console. We’ve made updates to provide a more intuitive layout, easier navigation, and a more accessible experience. The new console uses the AWS Cloudscape design system for front-end components to give you the consistent and familiar experience you get with the AWS Management Console.

AWS Wickr is a security-first messaging and collaboration service designed to keep internal and external communications secure, private, and compliant. It protects one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing with end-to-end encryption. Customers have full administrative control to enforce information governance policies, configure ephemeral messaging, and log both internal and external conversations in an AWS Wickr network to a private data store for data retention and auditing purposes. AWS Wickr is available in commercial AWS Regions including US East (N. Virginia), Canada (Central), Asia Pacific (Malaysia, Singapore, Sydney, and Tokyo), and Europe (London, Frankfurt, Stockholm, and Zurich). It is also available in GovCloud (US-West) as Department of Defense Impact Level 5 (DoD IL5)-authorized AWS WickrGov. The new console experience will be made available in phases over the coming weeks. Administrators will still be able to access the classic console for a limited period to ensure a smooth transition to the new experience. To learn more and get started, see the following resources:
- AWS Wickr Administration Guide
- AWS Wickr Product Details
Amazon RDS for SQL Server supports new minor version in February 2025
A new minor version of Microsoft SQL Server is now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports this latest minor version of SQL Server 2019 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances in the Amazon RDS User Guide. The new minor version is SQL Server 2019 CU31 15.0.4420.2. It is available in all AWS commercial Regions where Amazon RDS for SQL Server is offered, as well as the AWS GovCloud (US) Regions. Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.
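A minimal boto3 sketch of the upgrade follows. The instance identifier is a placeholder, and the engine version string is an assumption based on the usual RDS "15.00.xxxx.y.v1" pattern; confirm the exact string with `describe_db_engine_versions` before upgrading.

```python
import boto3

rds = boto3.client("rds")

# Apply the SQL Server 2019 CU31 minor version to an existing instance.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-db",  # hypothetical instance
    EngineVersion="15.00.4420.2.v1",         # assumed RDS version string
    ApplyImmediately=False,                  # upgrade in the maintenance window
)
```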
Amazon S3 Access Grants now authenticate based on the union of IdP and IAM permissions
Amazon S3 Access Grants now authenticate based on the union of both Identity Provider (IdP) and AWS Identity and Access Management (IAM) permissions. This means customers can use AWS machine learning and analytics services such as Amazon SageMaker Unified Studio, Amazon Redshift, and AWS Glue to request access to their S3 data, and S3 Access Grants will grant access to their data after evaluating both their IdP and IAM permissions.

Now that S3 Access Grants evaluate both IAM and IdP permissions, you no longer have to choose between identity contexts when requesting access to S3. With just a few clicks in the AWS Management Console or a few lines of code using the AWS SDK, you can map S3 permissions to users and groups in an existing corporate directory, such as Entra ID and Okta, or to an IAM user or role. S3 Access Grants automatically update S3 permissions based on end-user group membership as users are added to and removed from groups in the IdP. Amazon S3 Access Grants are available in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing. To learn more about S3 Access Grants, visit the S3 User Guide.
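At the SDK level, a grantee asks S3 Access Grants for temporary credentials scoped to a granted prefix. A minimal boto3 sketch, with placeholder account ID and S3 target:

```python
import boto3

s3control = boto3.client("s3control")

# Request temporary credentials for a prefix covered by an access grant.
response = s3control.get_data_access(
    AccountId="123456789012",           # placeholder account
    Target="s3://my-bucket/finance/*",  # placeholder grant target
    Permission="READ",
)

# Use the vended credentials to read the granted data.
creds = response["Credentials"]
grantee_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```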
Amazon Data Firehose now delivers real-time streaming data into Amazon S3 Tables
Today, we are excited to announce the general availability of the Amazon Data Firehose (Firehose) integration with Amazon S3 Tables, a feature that enables customers to deliver real-time streaming data into Amazon S3 Tables without requiring any code development or multi-step processes.

Firehose can acquire streaming data from Amazon Kinesis Data Streams, Amazon MSK, the Direct PUT API, and AWS services such as AWS WAF web ACL logs and Amazon VPC Flow Logs, and deliver it to destinations like Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and others for analytics. Now, with the Amazon S3 Tables integration, customers can stream data from any of these sources directly into Amazon S3 Tables. As a serverless service, Firehose lets customers simply set up a stream by configuring the source and destination properties, and pay based on bytes processed. The new feature also enables customers to route records in a data stream to different Amazon S3 tables based on the content of the incoming record. Additionally, customers can automate processing for data correction and right-to-forget scenarios by applying row-level update or delete operations in the destination S3 tables. To get started, visit the Amazon Data Firehose documentation and console.
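A boto3 sketch of creating a Direct PUT stream that delivers into a table follows. This is a sketch only: the ARNs, database, and table names are placeholders, and the exact Iceberg destination fields for S3 Tables should be checked against the Firehose API reference.

```python
import boto3

firehose = boto3.client("firehose")

# Create a Direct PUT stream that writes into an S3 table via the
# Iceberg destination; S3Configuration holds error/backup output.
firehose.create_delivery_stream(
    DeliveryStreamName="events-to-s3-tables",
    DeliveryStreamType="DirectPut",
    IcebergDestinationConfiguration={
        "CatalogConfiguration": {
            "CatalogARN": "arn:aws:glue:us-east-1:123456789012:catalog"  # placeholder
        },
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseS3TablesRole",
        "DestinationTableConfigurationList": [
            {
                "DestinationDatabaseName": "analytics",  # hypothetical namespace
                "DestinationTableName": "events",        # hypothetical table
            }
        ],
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/FirehoseS3TablesRole",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```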
AWS announces new AWS Direct Connect location in Lisbon, Portugal
Today, AWS announced the opening of a new AWS Direct Connect location within the Equinix LS1 data center near Lisbon, Portugal. By connecting your network to AWS at the new location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones. This site is the first AWS Direct Connect location within Portugal. The new Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.

The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. For more information on the over 145 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
AWS CodePipeline supports invoking pipeline execution with a new action type
AWS CodePipeline now enables direct pipeline-to-pipeline invocation with a new native action. This feature simplifies triggering downstream pipeline executions and passing pipeline variables and source revisions between pipelines.

The new CodePipeline Invoke action eliminates the need for workarounds like configuring CodeBuild projects or using the Commands action with custom shell commands. You can now directly specify subsequent pipelines to be executed with pipeline variables and source revisions. For example, when using separate pipelines for Docker image building and deployment, you can pass image digests between pipelines seamlessly. The action also supports cross-account pipeline triggering. To learn more about using the CodePipeline Invoke action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. This new action is available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
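A sketch of how the action might appear in a pipeline definition (a Python dict fragment). The provider name, configuration keys, and variable-passing syntax here are assumptions based on the action's description, so confirm them in the CodePipeline action reference before use.

```python
# Fragment of a pipeline stage that triggers a downstream pipeline.
invoke_stage = {
    "name": "TriggerDeploy",
    "actions": [
        {
            "name": "InvokeDeployPipeline",
            "actionTypeId": {
                "category": "Invoke",
                "owner": "AWS",
                "provider": "CodePipeline",  # assumed provider name
                "version": "1",
            },
            "configuration": {
                # Assumed keys: the downstream pipeline and variables to pass,
                # e.g. an image digest produced by an upstream build stage.
                "PipelineName": "deploy-pipeline",
                "Variables": '[{"name":"IMAGE_DIGEST","value":"#{BuildVars.DIGEST}"}]',
            },
            "runOrder": 1,
        }
    ],
}
```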
AWS Storage Gateway is now available in AWS Asia Pacific (Thailand) Region
AWS Storage Gateway expands availability to the AWS Asia Pacific (Thailand) Region, enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.

AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud. Visit the AWS Storage Gateway product page to learn more. Access the AWS Storage Gateway console to get started. To see all the Regions where AWS Storage Gateway is available, please visit the AWS Region table.
AWS Blogs
AWS Japan Blog (Japanese)
AWS News Blog
AWS Cloud Operations Blog
AWS Big Data Blog
AWS Compute Blog
Containers
AWS Database Blog
AWS DevOps & Developer Productivity Blog
AWS Machine Learning Blog
- Getting started with computer use in Amazon Bedrock Agents
- Evaluating RAG applications with Amazon Bedrock knowledge base evaluation
AWS for M&E Blog
AWS Messaging & Targeting Blog
AWS Storage Blog
Open Source Projects
AWS CLI
AWS CDK
Amplify for JavaScript
- tsc-compliance-test@0.1.79
- 2025-03-13 Amplify JS release - aws-amplify@6.13.5
- @aws-amplify/storage@6.7.15
- @aws-amplify/pubsub@6.1.49
- @aws-amplify/predictions@6.1.49
- @aws-amplify/notifications@2.0.74
- @aws-amplify/interactions@6.1.15
- @aws-amplify/geo@3.0.74
- @aws-amplify/datastore-storage-adapter@2.1.76
- @aws-amplify/datastore@5.0.76