11/29/2024, 12:00:00 AM ~ 12/2/2024, 12:00:00 AM (UTC)
Recent Announcements
Announcing Amazon EC2 I8g instances
AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors, which deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.
I8g instances offer sizes up to 24xlarge, with 768 GiB of memory and 22.5 TB of instance storage. They are ideal for real-time applications such as relational databases, non-relational databases, streaming databases, search, and data analytics. I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon). To learn more, see Amazon EC2 I8g instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and the Porting Advisor for Graviton. To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs.
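As a minimal getting-started sketch, the following boto3 call launches an I8g instance; the AMI ID, subnet, and tag values are placeholders you would replace with your own.

```python
import boto3

# Launch a storage optimized I8g instance in a Region where I8g is available
# (US East (N. Virginia) or US West (Oregon) at launch).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: an arm64 (Graviton) AMI
    InstanceType="i8g.4xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "i8g-storage-test"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```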
Amazon Web Services announces declarative policies
Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as a baseline configuration for AWS services within their organization. For example, customers can configure EC2 to allow instance launches only from AMIs vended by specific providers, and block public access in their VPCs, for their entire organization with a few clicks or commands using declarative policies.
Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in a declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to an account status report that provides insight into the current configuration of an AWS service across their organization, helping them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages that redirect them to internal wikis or ticketing systems. To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies currently support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.
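For illustration, here is a hedged boto3 sketch of creating and attaching an EC2 declarative policy with AWS Organizations; the policy type value and the content keys shown are assumptions to verify against the declarative policies syntax reference.

```python
import json
import boto3

org = boto3.client("organizations")

# Content keys (ec2_attributes / image_block_public_access) are illustrative;
# check the declarative policies syntax reference for the exact schema.
policy_content = {
    "ec2_attributes": {
        "image_block_public_access": {
            "state": {"@@assign": "block_new_sharing"}
        }
    }
}

policy = org.create_policy(
    Name="block-public-ami-sharing",
    Description="Disallow new public sharing of AMIs org-wide",
    Type="DECLARATIVE_POLICY_EC2",   # assumed policy type value for EC2 declarative policies
    Content=json.dumps(policy_content),
)

# Attach the policy to the organization root (or an OU / account).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",      # placeholder root ID
)
```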
Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake
Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can focus on effective security operations while potentially lowering your analytics costs.
Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration. OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 Regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), and Canada (Central). To learn more about using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
Amazon CloudWatch and Amazon OpenSearch Service launch an integrated analytics experience
Amazon Web Services announces a new integrated analytics experience and zero-ETL integration between Amazon CloudWatch and Amazon OpenSearch Service, giving customers the best of both services. CloudWatch customers can now leverage OpenSearch’s Piped Processing Language (PPL) and OpenSearch SQL. Additionally, CloudWatch customers can accelerate troubleshooting with out-of-the-box curated dashboards for vended logs such as Amazon Virtual Private Cloud (VPC), AWS CloudTrail, and AWS WAF. OpenSearch customers can now analyze CloudWatch Logs without having to duplicate data.
With this integration, CloudWatch Logs customers have two more query languages for log analytics, in addition to CloudWatch Logs Insights QL. Customers can use SQL to analyze data, correlate logs using JOINs and subqueries, and use JSON, mathematical, datetime, and string functions for intuitive log analytics. They can also use OpenSearch PPL to filter, aggregate, and analyze their data. With a few clicks, CloudWatch Logs customers can create OpenSearch dashboards for VPC, WAF, and CloudTrail logs to monitor, analyze, and troubleshoot using visualizations derived from the logs. OpenSearch customers no longer have to copy logs from CloudWatch for analysis or create ETL pipelines; they can use OpenSearch Discover to analyze CloudWatch logs in place and build indexes and dashboards on CloudWatch Logs. This integration is available in the Regions where OpenSearch Service direct query is available. For pricing and free tier details, see Amazon CloudWatch Pricing and OpenSearch Service Pricing. To get started, refer to the Amazon CloudWatch Logs vended dashboard and the Amazon OpenSearch Service Developer Guide.
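As a sketch of the new query-language choice, the call below runs a PPL query against a CloudWatch log group with boto3; the queryLanguage parameter value and the query string are assumptions based on this launch, and the log group name is a placeholder.

```python
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Run an OpenSearch PPL query over a vended log group (placeholder name).
# The queryLanguage parameter is an assumption from this launch; older SDKs
# only support CloudWatch Logs Insights QL.
query = logs.start_query(
    logGroupNames=["/aws/vpc/flow-logs"],
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryLanguage="PPL",
    queryString="fields srcAddr | stats count() as requests by srcAddr | sort - requests | head 10",
)
results = logs.get_query_results(queryId=query["queryId"])
print(results["status"], len(results.get("results", [])))
```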
Introducing Amazon EC2 next generation high density Storage Optimized I7ie instances
Amazon Web Services is announcing the general availability of next generation high density Storage Optimized I7ie instances. Designed for large, storage I/O intensive workloads, I7ie instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance than existing I3en instances. I7ie instances have the highest local NVMe storage density in the cloud for storage optimized instances and offer up to twice as many vCPUs and twice as much memory as prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
I7ie instances are high density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently very low latency when accessing large data sets. I7ie instances also deliver 40% better compute performance to run more complex queries without increasing the storage density per vCPU. Additionally, the 16 KB torn write prevention feature enables customers to eliminate performance bottlenecks. I7ie instances deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS). I7ie instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, London), and Asia Pacific (Tokyo). Customers can use these instances with On-Demand and Savings Plans purchase options. To learn more, visit the I7ie instances page.
Announcing the general availability of Amazon MemoryDB Multi-Region
Today, AWS announces the general availability of Amazon MemoryDB Multi-Region, a fully managed, active-active, multi-Region database that lets you build multi-Region applications with up to 99.999% availability, microsecond read latency, and single-digit millisecond write latency. MemoryDB is a fully managed, Valkey- and Redis OSS-compatible database service providing multi-AZ durability, microsecond read and single-digit millisecond write latency, and high throughput. Valkey is an open source, high performance, key-value data store, stewarded by the Linux Foundation, and is a drop-in replacement for Redis OSS.
With MemoryDB Multi-Region, you can build highly available multi-Region applications for increased resiliency. It offers active-active replication so you can serve reads and writes locally from the Regions closest to your customers with microsecond read and single-digit millisecond write latency. MemoryDB Multi-Region asynchronously replicates data between Regions and typically propagates data within a second. It automatically resolves update conflicts and corrects data divergence issues, so you can focus on building your application.
Get started with MemoryDB Multi-Region from the AWS Management Console or using the latest AWS SDK or AWS Command Line Interface (AWS CLI). First, you need to identify the set of AWS Regions where you want to replicate your data. Then choose an AWS Region to create a new multi-Region cluster and a regional cluster. Once the first regional cluster is created, you can add up to four additional Regions to the multi-Region cluster.
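The steps above might look like the following boto3 sketch; the create_multi_region_cluster operation and parameter names are assumptions based on the launch documentation, and node types, names, and the ACL are placeholders.

```python
import boto3

# Create the multi-Region cluster, then the first regional cluster inside it.
# Operation and parameter names are assumptions; verify against the latest SDK.
memorydb_use1 = boto3.client("memorydb", region_name="us-east-1")

mrc = memorydb_use1.create_multi_region_cluster(
    MultiRegionClusterNameSuffix="orders",   # assumed parameter
    Engine="valkey",
    NodeType="db.r7g.xlarge",
    NumShards=2,
    TLSEnabled=True,
)
mrc_name = mrc["MultiRegionCluster"]["MultiRegionClusterName"]  # assumed response shape

# First regional cluster in us-east-1.
memorydb_use1.create_cluster(
    ClusterName="orders-use1",
    MultiRegionClusterName=mrc_name,         # assumed parameter
    NodeType="db.r7g.xlarge",
    ACLName="open-access",                   # placeholder ACL
)

# Repeat create_cluster in up to four additional Regions, e.g. eu-west-1.
```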
MemoryDB Multi-Region is available for Valkey in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). To learn more, please visit the MemoryDB features page, getting started blog, and documentation. For pricing, please refer to the MemoryDB pricing page.
Storage Browser for Amazon S3 is now generally available
Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.
With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks. We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
Amazon S3 adds new default data integrity protections
Amazon S3 updates the default behavior of object upload requests with new data integrity protections that build upon S3’s existing durability posture. The latest AWS SDKs now automatically calculate CRC-based checksums for uploads as data is transmitted over the network. S3 independently verifies these checksums and accepts objects only after confirming that data integrity was maintained in transit over the public internet. Additionally, S3 now stores a CRC-based whole-object checksum in object metadata, even for multipart uploads, which helps you verify the integrity of an object stored in S3 at any time.
S3 has always validated the integrity of object uploads from the S3 API to storage by calculating MD5 checksums, and has allowed customers to provide their own pre-calculated MD5 checksums for integrity validation. S3 also supports five additional checksum algorithms (CRC64NVME, CRC32, CRC32C, SHA-1, and SHA-256) for integrity validation on upload and download. Using checksums for data validation is a best practice for data durability, and this new default behavior adds additional data integrity protections with no changes to your applications and at no additional cost. Default checksum protections are rolling out across all AWS Regions in the next few weeks. To get started, you can use the AWS Management Console or the latest AWS SDKs to upload objects. To learn more about checksums in S3, visit the AWS News Blog and the S3 User Guide.
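A brief boto3 sketch of the behavior described above: upload an object with an explicit CRC checksum and read the stored checksum back later; the bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# With recent SDK versions a CRC-based checksum is calculated automatically;
# you can also request a specific algorithm explicitly.
resp = s3.put_object(
    Bucket="amzn-s3-demo-bucket",          # placeholder bucket
    Key="reports/2024-11.csv",
    Body=b"col_a,col_b\n1,2\n",
    ChecksumAlgorithm="CRC32",
)
print("Checksum on upload:", resp.get("ChecksumCRC32"))

# Retrieve the stored checksum later to re-verify object integrity.
attrs = s3.get_object_attributes(
    Bucket="amzn-s3-demo-bucket",
    Key="reports/2024-11.csv",
    ObjectAttributes=["Checksum"],
)
print("Stored checksum:", attrs["Checksum"])
```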
Amazon EC2 introduces Allowed AMIs to enhance AMI governance
Amazon EC2 introduces Allowed AMIs, a new account-wide setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts. You can now simply specify the AMI owner accounts or AMI owner aliases permitted within your account, and only AMIs from these owners will be visible and available for launching EC2 instances.
Prior to today, you could use any AMI explicitly shared with your account or any public AMI, regardless of its origin or trustworthiness, putting you at risk of accidentally using an AMI that didn’t meet your organization’s compliance requirements. Now, with Allowed AMIs, your administrators can specify the accounts or owner aliases whose AMIs are permitted for discovery and use within your AWS environment. This streamlined approach provides guardrails that reduce the risk of inadvertently using non-compliant or unauthorized AMIs. Allowed AMIs also supports an audit mode that identifies EC2 instances launched using AMIs not permitted by the setting, helping you find non-compliant instances before the setting is enforced. You can apply this setting across AWS Organizations and Organizational Units using declarative policies, allowing you to manage and enforce it at scale. The Allowed AMIs setting applies only to public AMIs and AMIs explicitly shared with your AWS accounts. By default, the setting is disabled for all AWS accounts. You can enable it using the AWS CLI, SDKs, or Console. To learn more, please visit our documentation.
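A hedged boto3 sketch of the workflow described above, first auditing and then restricting AMI providers; the operation and parameter names are assumptions based on the Allowed AMIs documentation and should be verified against the latest SDK.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# API and parameter names below are assumptions from the Allowed AMIs launch.

# Start in audit mode to find instances using AMIs that would be blocked.
ec2.enable_allowed_images_settings(AllowedImagesSettingsState="audit-mode")

# Permit only AMIs owned by Amazon and one trusted account (placeholder ID).
ec2.replace_image_criteria_in_allowed_images_settings(
    ImageCriteria=[{"ImageProviders": ["amazon", "111122223333"]}]
)

# Review the current state, then switch the state to "enabled" to enforce.
print(ec2.get_allowed_images_settings())
```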
Announcing Amazon EKS Auto Mode
Today at re:Invent, AWS announced Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode, a new feature that fully automates compute, storage, and networking management for Kubernetes clusters. Amazon EKS Auto Mode simplifies running Kubernetes by offloading cluster operations to AWS, improves the performance and security of your applications, and helps optimize compute costs.
You can use EKS Auto Mode to get Kubernetes-conformant managed compute, networking, and storage for any new or existing EKS cluster. This makes it easier for you to leverage the security, scalability, availability, and efficiency of AWS for your Kubernetes applications. EKS Auto Mode removes the need for deep expertise, ongoing infrastructure management, or capacity planning by automatically selecting the best EC2 instances to run your application. It helps optimize compute costs while maintaining application availability by dynamically scaling EC2 instances based on demand. EKS Auto Mode provisions, operates, secures, and upgrades EC2 instances within your account using AWS-controlled access and lifecycle management. It handles OS patches and updates and limits security risks with ephemeral compute, which strengthens your security posture by default.
EKS Auto Mode is available today in all AWS Regions, except AWS GovCloud (US) and China Regions. You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments—you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs.
To get started with EKS Auto Mode, use the EKS API, AWS Console, or your favorite infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Auto Mode and how it can streamline your Kubernetes operations, visit the EKS Auto Mode feature page and see the AWS News launch blog.
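For illustration, the boto3 call below creates a cluster with Auto Mode enabled; the computeConfig, kubernetesNetworkConfig, and storageConfig shapes are assumptions based on the launch documentation, and the role ARNs and subnets are placeholders.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Sketch of enabling Auto Mode at cluster creation; config shapes are assumed.
eks.create_cluster(
    name="auto-mode-demo",
    version="1.31",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",        # placeholder
    resourcesVpcConfig={"subnetIds": ["subnet-0abc", "subnet-0def"]},  # placeholders
    computeConfig={
        "enabled": True,
        "nodePools": ["general-purpose", "system"],
        "nodeRoleArn": "arn:aws:iam::111122223333:role/eks-auto-node-role",  # placeholder
    },
    kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
    storageConfig={"blockStorage": {"enabled": True}},
)
```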
AWS announces Amazon CloudWatch Database Insights
AWS announces the general availability of Amazon CloudWatch Database Insights with support for Amazon Aurora PostgreSQL and Amazon Aurora MySQL. Database Insights is a database observability solution that provides a curated experience designed for DevOps engineers, application developers, and database administrators (DBAs) to expedite database troubleshooting and gain a holistic view of their database fleet health.
Database Insights consolidates logs and metrics from your applications, your databases, and the operating systems on which they run into a unified view in the console. Using its pre-built dashboards, recommended alarms, and automated telemetry collection, you can monitor the health of your database fleets and use a guided troubleshooting experience to drill down to individual instances for root-cause analysis. Application developers can correlate the impact of database dependencies with the performance and availability of their business-critical applications, because they can drill down from their application performance view in Amazon CloudWatch Application Signals to the specific dependent database in Database Insights. You can get started with Database Insights by enabling it on your Aurora clusters using the Aurora service console, AWS APIs, or SDKs. Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis. Database Insights is available in all public AWS Regions and uses new vCPU-based pricing; see the pricing page for details. For further information, visit the Database Insights documentation.
AWS Control Tower launches managed controls using declarative policies
Today, we are excited to announce the general availability of managed, preventive controls implemented using declarative policies in AWS Control Tower. These optional controls help you consistently enforce the desired configuration for a service. For example, customers can deploy a declarative, policy-based preventive control that disallows public sharing of Amazon Machine Images (AMIs). Declarative policies help you ensure that the configured controls are always enforced, regardless of the introduction of new APIs or the addition of new principals or accounts.
Today, AWS Control Tower is releasing declarative, policy-based preventive controls for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Elastic Block Store (Amazon EBS). These controls help you achieve control objectives such as limiting network access, enforcing least privilege, and managing vulnerabilities. AWS Control Tower’s new declarative policy-based preventive controls complement its existing control capabilities, enabling you to disallow actions that lead to policy violations. The combination of preventive, proactive, and detective controls helps you monitor whether your multi-account AWS environment is secure and managed in accordance with best practices. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
Announcing Amazon EKS Hybrid Nodes
Today, AWS announces the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Hybrid Nodes. With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications.
You can now manage Kubernetes applications running on-premises and in edge environments to meet low-latency, local data processing, regulatory, or policy requirements using the same Amazon EKS clusters, features, and tools as applications running in AWS Cloud. Amazon EKS Hybrid Nodes works with any on-premises hardware or virtual machines, bringing the efficiency, scalability, and availability of Amazon EKS to wherever your applications need to run. You can use a wide range of Amazon EKS features with Amazon EKS Hybrid Nodes including Amazon EKS add-ons, EKS Pod Identity, cluster access management, cluster insights, and extended Kubernetes version support. Amazon EKS Hybrid Nodes is natively integrated with various AWS services including AWS Systems Manager, AWS IAM Roles Anywhere, Amazon Managed Service for Prometheus, Amazon CloudWatch, and Amazon GuardDuty for centralized monitoring, logging, and identity management.
Amazon EKS Hybrid Nodes is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. Amazon EKS Hybrid Nodes is currently available for new Amazon EKS clusters. With Amazon EKS Hybrid Nodes, there are no upfront commitments or minimum fees, and you are charged per hour for the vCPU resources of your hybrid nodes when they are attached to your Amazon EKS clusters.
To get started and learn more about Amazon EKS Hybrid Nodes, see the Amazon EKS Hybrid Nodes User Guide, product webpage, pricing webpage, and AWS News Launch blog.
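A hedged boto3 sketch of creating an EKS cluster that can register hybrid nodes; the remoteNetworkConfig shape is an assumption based on the launch documentation, and the CIDRs, role ARN, and subnets are placeholders for your own environment.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Create a cluster that accepts hybrid nodes; remoteNetworkConfig shape assumed.
eks.create_cluster(
    name="hybrid-nodes-demo",
    version="1.31",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",        # placeholder
    resourcesVpcConfig={"subnetIds": ["subnet-0abc", "subnet-0def"]},  # placeholders
    accessConfig={"authenticationMode": "API_AND_CONFIG_MAP"},
    remoteNetworkConfig={
        "remoteNodeNetworks": [{"cidrs": ["10.80.0.0/16"]}],   # on-premises node CIDRs
        "remotePodNetworks": [{"cidrs": ["10.90.0.0/16"]}],    # on-premises pod CIDRs
    },
)
```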
Amazon Bedrock now supports Rerank API to improve accuracy of RAG applications
Amazon Bedrock announces support for reranker models through the Rerank API, enabling developers to improve the relevance of responses in Retrieval-Augmented Generation (RAG) applications. Reranker models rank a set of retrieved documents based on their relevance to the user’s query, helping to prioritize the most relevant content to pass to the foundation model (FM) for response generation. Amazon Bedrock Knowledge Bases offers fully managed, end-to-end RAG workflows to create custom generative AI applications by incorporating contextual information from various data sources. For Amazon Bedrock Knowledge Bases users, the reranker can be enabled through a setting in the Retrieve and RetrieveAndGenerate APIs.
Semantic search in RAG systems can improve document retrieval relevance but may struggle with complex or ambiguous queries. For example, a customer service chatbot asked about returning an online purchase might retrieve documents on both return policies and shipping guidelines. Without proper ranking, the generated response could focus on shipping instead of returns, missing the user’s intent. Amazon Bedrock now provides access to reranking models that address this by reordering retrieved documents based on their relevance to the user query. This ensures the most useful information is sent to the foundation model for response generation, optimizing context window usage and potentially reducing costs. The Rerank API supports the Amazon Rerank 1.0 and Cohere Rerank 3.5 models. These models are available in US West (Oregon), Canada (Central), Europe (Frankfurt), and Asia Pacific (Tokyo). To learn more, visit the Amazon Bedrock product documentation. For details on pricing, refer to the pricing page.
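As an illustration, the sketch below calls the Rerank API through the bedrock-agent-runtime client in boto3; the request and response shapes and the model ARN are assumptions based on the launch documentation and should be checked against the current API reference.

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

# Rerank two candidate passages against a user query (request shape assumed).
response = client.rerank(
    queries=[{"type": "TEXT", "textQuery": {"text": "How do I return an online purchase?"}}],
    sources=[
        {"type": "INLINE", "inlineDocumentSource": {
            "type": "TEXT",
            "textDocument": {"text": "Our return policy allows returns within 30 days."}}},
        {"type": "INLINE", "inlineDocumentSource": {
            "type": "TEXT",
            "textDocument": {"text": "Standard shipping takes 3-5 business days."}}},
    ],
    rerankingConfiguration={
        "type": "BEDROCK_RERANKING_MODEL",
        "bedrockRerankingConfiguration": {
            # Assumed model ARN for Amazon Rerank 1.0; replace with a supported model.
            "modelConfiguration": {"modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"},
            "numberOfResults": 2,
        },
    },
)
for result in response["results"]:          # assumed response fields
    print(result["index"], result["relevanceScore"])
```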
AWS simplifies the use of third-party block storage arrays with AWS Outposts
Starting today, customers can attach block data volumes backed by NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™ to Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts directly from the AWS Management Console. This makes it easier for customers to leverage third-party storage with Outposts. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.
With this enhancement, Outposts customers can combine the cloud capabilities offered by Outposts with the advanced data management features, high density storage, and high performance offered by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. Today, customers can use Amazon Elastic Block Store (Amazon EBS) and local instance store volumes to store and process data locally and comply with data residency requirements. Now, they can do so while also leveraging external volumes backed by compatible third-party storage. With this enhancement, customers can maximize the value of their existing storage investments while benefiting from the cloud operational model enabled by Outposts. The enhancement is available on Outposts racks and Outposts 2U servers at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud (US) Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information. You can use the AWS Management Console or CLI to attach third-party block data volumes to Amazon EC2 instances on Outposts. To learn more, check out this blog post.
AWS announces Invoice Configuration
Today, AWS announces the general availability of Invoice Configuration, which enables you to customize your invoicing experience and receive separate AWS invoices based on your organizational structure. You can group AWS accounts according to your internal business entities, such as legal entities, subsidiaries, or cost centers, and receive a separate AWS invoice for each business entity within the same AWS Organization. A separate invoice per business entity lets you track invoices separately, enabling faster processing of AWS invoices by removing the manual work of splitting an invoice at the entity level.
With Invoice Configuration, you can create Invoice Units, groups of member accounts that represent your business entities, and then designate a member or management account as the receiver for each business entity’s invoice. You can optionally associate a purchase order with an Invoice Unit and visualize charges by Invoice Unit using Cost Categories in Cost Explorer and the Cost and Usage Report. You can use Invoice Configuration through the AWS Billing and Cost Management console, or access it through the AWS SDKs or AWS CLI to programmatically create and manage Invoice Units. Invoice Configuration is available in all public AWS Regions, excluding the AWS GovCloud (US) Regions and the China (Beijing) and China (Ningxia) Regions. To learn more, visit the product page and blog post, or review the User Guide and API Reference.
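For illustration, a hedged boto3 sketch of creating an Invoice Unit; the "invoicing" client name and request fields are assumptions based on the launch, and all account IDs are placeholders.

```python
import boto3

# Client name and request fields are assumed; verify against the latest SDK.
invoicing = boto3.client("invoicing", region_name="us-east-1")

invoicing.create_invoice_unit(
    Name="emea-subsidiary",
    Description="Invoice unit for the EMEA legal entity",
    InvoiceReceiver="111122223333",                           # placeholder receiver account
    Rule={"LinkedAccounts": ["444455556666", "777788889999"]},  # placeholder member accounts
)
```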
Announcing AWS Transfer Family web apps
AWS Transfer Family web apps are a new resource that you can use to create a simple interface for accessing your data in Amazon S3 through a web browser. With Transfer Family web apps, you can provide your workforce with a fully managed, branded, and secure portal for end users to browse, upload, and download data in S3.
Transfer Family offers fully managed file transfers over SFTP, FTPS, FTP, and AS2, enabling seamless workload migrations with no need to change your third-party clients or their configurations. Now, you can also enable browser-based transfers for non-technical users in your organization through a user-friendly interface. Transfer Family web apps are integrated with AWS IAM Identity Center and S3 Access Grants, enabling fine-grained access controls that map corporate identities in your existing directories directly to S3 datasets. With a few clicks in the Transfer Family console, you can generate a shareable URL for your web app. Authenticated users can then access the data you have authorized through their web browsers. Transfer Family web apps are available in select AWS Regions. You can get started with Transfer Family web apps in the Transfer Family console. For pricing, visit the Transfer Family pricing page. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.
PartyRock improves app discovery and announces upcoming free daily use
Starting today, PartyRock supports improved app discovery using search, making it even easier to explore and build with generative AI. In addition, a new and improved daily free usage model will replace the current free trial grant in 2025, further empowering everyone to build AI apps on PartyRock with recurring free daily use.
Previously, AWS offered new PartyRock users a free trial for a limited time. Starting in 2025, a free daily use grant will let you access and experiment with PartyRock apps without worrying about exhausting free trial credits. Since its launch in November 2023, more than half a million apps have been created by PartyRock users. Until now, discovering those apps required link or playlist sharing, or browsing featured apps on the PartyRock Discover page. Users can now use the search bar on the homepage to explore all publicly published PartyRock apps. Discover how you can build apps to improve your everyday productivity and experiment with these new features by trying PartyRock today. To learn more, read our AWS News Blog.
Amazon S3 launches storage classes for AWS Dedicated Local Zones
You can now use the Amazon S3 Express One Zone and S3 One Zone-Infrequent Access storage classes in AWS Dedicated Local Zones. Dedicated Local Zones are a type of AWS infrastructure that is fully managed by AWS, built for exclusive use by you or your community, and placed in a location or data center specified by you to help you comply with regulatory requirements.
In Dedicated Local Zones, these storage classes are purpose-built to store data in a specific data perimeter, helping to support your data isolation and data residency use cases. To learn more, visit the S3 User Guide.
Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data
Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data, allowing developers to add, update, or delete data in their knowledge base through direct API calls. Amazon Bedrock Knowledge Bases offers fully managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from your company’s data sources. With this new capability, customers can easily ingest specific documents from custom data sources or Amazon S3 without requiring a full sync, and ingest streaming data without the need for intermediary storage.
This enhancement enables customers to ingest specific documents from any custom data source and reduce the latency and operational costs of intermediary storage when ingesting streaming data. For instance, a financial services firm can now keep its knowledge base continuously updated with the latest market data, ensuring that its generative AI applications deliver the most relevant information to end users. By eliminating time-consuming full syncs and storage steps, customers gain faster access to data, reducing latency and improving application performance. Customers can start using this feature through the console or programmatically via the APIs. In the console, users can select a custom connector as the data source, then add documents, text, or base64-encoded text strings. This capability is available in all AWS Regions where Amazon Bedrock Knowledge Bases is supported. There is no additional cost for using the new custom connector capability. To learn more, visit the Amazon Bedrock Knowledge Bases product documentation.
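A hedged boto3 sketch of the direct ingestion flow through a custom connector; the ingest_knowledge_base_documents operation and document structure are assumptions based on the launch documentation, and the knowledge base, data source, and document IDs are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-west-2")

# Push a single document into a knowledge base without a full data source sync.
# Operation name and request shape are assumed; verify against the API reference.
bedrock_agent.ingest_knowledge_base_documents(
    knowledgeBaseId="KB12345678",          # placeholder knowledge base ID
    dataSourceId="DS12345678",             # placeholder custom data source ID
    documents=[{
        "content": {
            "dataSourceType": "CUSTOM",
            "custom": {
                "customDocumentIdentifier": {"id": "market-update-2024-12-02"},
                "sourceType": "IN_LINE",
                "inlineContent": {
                    "type": "TEXT",
                    "textContent": {"data": "Closing prices for 2024-12-02: ..."},
                },
            },
        }
    }],
)
```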
AWS DMS Schema Conversion now uses generative AI
AWS Database Migration Service (AWS DMS) Schema Conversion with generative AI is now available. The feature is currently available for database schema conversion from commercial engines, such as Microsoft SQL Server, to Amazon Aurora PostgreSQL-Compatible Edition and Amazon Relational Database Service (Amazon RDS) for PostgreSQL.
Using generative AI recommendations, you can simplify and accelerate your database migration projects, particularly when converting complex code objects which typically require manual conversion, such as stored procedures, functions, or triggers. AWS DMS Schema Conversion with generative AI converts up to 90% of your schema.
AWS DMS Schema Conversion with generative AI is currently available in three AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Frankfurt).
You can use this feature in the AWS Management Console or AWS Command Line Interface (AWS CLI) by selecting a commercial database such as Microsoft SQL Server as your source database and Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL as your target when initiating a schema conversion project. When converting applicable objects, you will see an option to enable generative AI for conversion. To get started, visit the AWS DMS Schema Conversion User Guide and check out this blog post.
AWS Verified Access now supports secure access to resources over non-HTTP(S) protocols (Preview)
Today, AWS announces the preview of a new AWS Verified Access feature that supports secure access to resources that connect over protocols such as TCP, SSH, and RDP. With this launch, Verified Access enables you to provide secure, VPN-less access to your corporate applications and resources using AWS zero trust principles. This feature eliminates the need to manage separate access and connectivity solutions for your non-HTTP(S) resources on AWS and simplifies security operations.
Verified Access evaluates each access request in real time based on the user’s identity and device posture, using fine-grained policies. With this feature, you can extend your existing Verified Access policies to enable secure access to non-HTTP(S) resources such as Git repositories, databases, and groups of EC2 instances. For example, you can create centrally managed policies that grant SSH access across your EC2 fleet to only authenticated members of the system administration team, while ensuring that connections are permitted only from compliant devices. This simplifies your security operations by allowing you to create, group, and manage access policies for applications and resources with similar security requirements from a single interface. This feature of AWS Verified Access is available in preview in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), South America (São Paulo), and Israel (Tel Aviv). To learn more, visit the product page, launch blog, and documentation.
AWS Blogs
AWS Japan Blog (Japanese)
- Amazon FSx for Lustre increases GPU instance throughput by up to 15 times
- Amazon EBS time-based snapshot copy
- Reduce compute costs for stream processing applications with Kinesis Client Library 3.0
- Benefits of PLM on AWS in manufacturing
- What are the next developments for VMware workloads on AWS?
- The trajectory of the new wind “Mitsubishi Electric AWS User Group (commonly known as MAWS-UG)” created by Mitsubishi Electric Group engineers
- Contribution: Introduction of Contributions at Cloud Measurements of Smart Meter Systems by Kansai Transmission and Distribution, Inc. (Part 3) — Second Half
- Contribution: Introduction of Contributions at Cloud Measurements of Smart Meter Systems by Kansai Transmission and Distribution, Inc. (Part 3) — First Half
- Contribution: Introduction of Contributions at Cloud Measurements of Smart Meter Systems by Kansai Transmission and Distribution, Inc. (Part 2) — Second Half
- Contribution: Introduction of Contributions at Cloud Measurements of Smart Meter Systems by Kansai Transmission and Distribution, Inc. (Part 2) — First Half
AWS News Blog
- New APIs in Amazon Bedrock to enhance RAG applications, now available
- Connect users to data through your apps with Storage Browser for Amazon S3
- Introducing new PartyRock capabilities and free daily usage
- Amazon MemoryDB Multi-Region is now generally available
- Introducing default data integrity protections for new objects in Amazon S3
- AWS Database Migration Service now automates time-intensive schema conversion tasks using generative AI
- Simplify governance with declarative policies
- AWS Verified Access now supports secure access to resources over non-HTTP(S) protocols (in preview)
- Announcing AWS Transfer Family web apps for fully managed Amazon S3 file transfers
- Introducing Amazon OpenSearch Service and Amazon Security Lake integration to simplify security analytics
- Use your on-premises infrastructure in Amazon EKS clusters with Amazon EKS Hybrid Nodes
- Streamline Kubernetes cluster management with new Amazon EKS Auto Mode
- Introducing storage optimized Amazon EC2 I8g instances powered by AWS Graviton4 processors and 3rd gen AWS Nitro SSDs
- Now available: Storage optimized Amazon EC2 I7ie instances
- New Amazon CloudWatch Database Insights: Comprehensive database observability from fleets to instances
- New Amazon CloudWatch and Amazon OpenSearch Service launch an integrated analytics experience
AWS Cloud Financial Management
AWS Big Data Blog
AWS Compute Blog
- NEW: Simplifying the use of third-party block storage with AWS Outposts
- Faster scaling with Amazon EC2 Auto Scaling Target Tracking
The Internet of Things on AWS – Official Blog
- Unlocking the Power of Edge Intelligence with AWS
- AWS IoT Services alignment with US Cyber Trust Mark
AWS Machine Learning Blog
- Cohere Rerank 3.5 is now available in Amazon Bedrock through Rerank API
- AWS DeepRacer: How to master physical racing?
- Easily deploy and manage hundreds of LoRA adapters with SageMaker efficient multi-adapter inference
- Improve the performance of your Generative AI applications with Prompt Optimization on Amazon Bedrock
AWS Security Blog
Open Source Project
Amplify UI
- @aws-amplify/ui-vue@4.2.25
- @aws-amplify/ui-react-storage@3.5.0
- @aws-amplify/ui-react-notifications@2.0.37
- @aws-amplify/ui-react-native@2.2.19
- @aws-amplify/ui-react-liveness@3.1.18
- @aws-amplify/ui-react-geo@2.0.33
- @aws-amplify/ui-react-core-notifications@2.0.32
- @aws-amplify/ui-react-core@3.1.1
- @aws-amplify/ui-react-ai@1.1.0
- @aws-amplify/ui-react@6.7.1