8/7/2025, 12:00:00 AM ~ 8/8/2025, 12:00:00 AM (UTC)
Recent Announcements
AWS Lambda now supports GitHub Actions to simplify function deployment
AWS Lambda now enables you to use GitHub Actions to automatically deploy Lambda functions when you push code or configuration changes to your GitHub repository, streamlining your continuous integration and continuous deployment (CI/CD) pipeline for serverless applications.
GitHub Actions lets application development teams automate their software delivery process, enabling CI/CD workflows that automatically build, test, and deploy code changes whenever developers push updates to their repositories. Previously, development teams building serverless applications with Lambda had to write custom scripts or AWS Command Line Interface (AWS CLI) commands to update Lambda functions from GitHub Actions. This required them to manually package function code artifacts, configure AWS Identity and Access Management (IAM) permissions, and set up error handling, which led to repetitive boilerplate code across repositories, longer onboarding for new developers, and a higher risk of deployment errors. Starting today, the new GitHub action provides a simplified way to deploy changes to Lambda functions using declarative configuration in GitHub Actions workflows, eliminating the complexity of manual deployment steps. The action supports both .zip file and container image deployments, handles code packaging automatically, and integrates seamlessly with IAM using OpenID Connect (OIDC) authentication. To get started, add the “Deploy Lambda Function” action to your GitHub Actions workflow file with configuration parameters for your Lambda function deployment. The action supports configuring function settings (including runtime, memory size, timeout, and environment variables), an optional “dry run” mode that validates a deployment without making changes, and Amazon S3-based deployment for larger .zip file packages. To learn more, visit the Lambda developer guide and the README for the “Deploy Lambda Function” GitHub action. You can use this GitHub action for your Lambda functions in all commercial AWS Regions where Lambda is available.
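For context, here is a minimal sketch (not the official action) of the kind of custom deployment script the new action replaces, written with boto3; the function name and .zip path are hypothetical.

```python
# Sketch of the manual deployment step that the "Deploy Lambda Function" action replaces.
# Assumes AWS credentials are already configured; "my-function" and the .zip path are hypothetical.
import boto3

def deploy_zip(function_name: str, zip_path: str) -> str:
    lambda_client = boto3.client("lambda")
    with open(zip_path, "rb") as f:
        response = lambda_client.update_function_code(
            FunctionName=function_name,
            ZipFile=f.read(),
        )
    # Wait until the update is fully applied before the pipeline continues.
    waiter = lambda_client.get_waiter("function_updated_v2")
    waiter.wait(FunctionName=function_name)
    return response["LastModified"]

if __name__ == "__main__":
    print(deploy_zip("my-function", "build/function.zip"))
```

With the new action, this kind of script is replaced by a few declarative parameters in the workflow file.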
Amazon EC2 C7g instances now available in additional regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g instances are available in the AWS Middle East (Bahrain), AWS Africa (Cape Town), and AWS Asia Pacific (Jakarta) Regions. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.
Amazon EC2 Graviton3 instances also use up to 60% less energy for the same performance than comparable EC2 instances, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C7g. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
AWS Outposts servers now support service link static configuration
Today, AWS announces the general availability of static networking configuration support for the service link and DNS IP addresses of AWS Outposts servers. This new feature enables customers to configure the service link interface and DNS IP addresses of their Outposts servers with static IP addresses during installation, eliminating the requirement for Dynamic Host Configuration Protocol (DHCP) servers in their data centers.
This enhancement is valuable for customers with stringent networking security requirements who cannot use DHCP servers in their data centers. These customers can now configure IP addresses manually for the service link connection while maintaining their security standards.
This feature is available in all AWS Regions where Outposts servers are supported.
Customers need to configure this feature during Outposts server installation. To learn more, visit the Outposts server installation guide.
Amazon Bedrock now available in the Asia Pacific (Melbourne) Region
Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Melbourne) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools.
Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models and other FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities, such as Guardrails and Model customization, that customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help customers build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while maintaining customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
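As an illustration, here is a minimal sketch of calling a model in the new Region through the single Bedrock API with boto3; the model ID is a placeholder for any model enabled in your account, and ap-southeast-4 is the Asia Pacific (Melbourne) Region code.

```python
# Minimal sketch: invoking a foundation model through the Bedrock Converse API
# in the Asia Pacific (Melbourne) Region. The model ID is a placeholder; use any
# model that is enabled for your account in that Region.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-4")  # Melbourne

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our release notes in two sentences."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```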
Amazon EKS adds safety control to prevent accidental cluster deletion
Amazon Elastic Kubernetes Service (EKS) now supports deletion protection, helping you prevent accidental termination of your EKS clusters. When enabled, deletion protection requires explicit disablement before a cluster can be deleted, providing an additional safety control for critical environments.
Deletion protection is turned off by default for all new and existing clusters. You can enable deletion protection during cluster creation or any time after. To delete a protected cluster, you must first disable deletion protection for the cluster and then proceed with the cluster deletion. This two-step verification process helps prevent unintended deletions that could result from automation errors or accidental commands, especially in environments where multiple users share cluster management responsibilities. Once enabled, any attempt to delete the cluster through the AWS Management Console, EKS APIs, AWS Command Line Interface (CLI), eksctl, or infrastructure as code tools like AWS CloudFormation will be blocked until deletion protection is disabled. This feature is available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more, visit the Amazon EKS documentation.
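A sketch of the two-step flow with boto3 follows; the deletionProtection parameter name is an assumption based on this announcement, so check the EKS API reference for the exact field. The cluster name is hypothetical.

```python
# Sketch of the two-step deletion flow described above, using boto3.
# The deletionProtection parameter name is an assumption; confirm it against the EKS API reference.
import boto3

eks = boto3.client("eks")
cluster_name = "prod-cluster"  # hypothetical cluster name

# Step 1: explicitly disable deletion protection on the protected cluster.
eks.update_cluster_config(name=cluster_name, deletionProtection=False)

# In practice, wait for the configuration update to complete before proceeding.

# Step 2: only then will the delete call succeed.
eks.delete_cluster(name=cluster_name)
```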
Amazon EC2 R7gd instances are now available in additional AWS Regions
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the Africa (Cape Town), Asia Pacific (Seoul), Europe (Milan), and Israel (Tel Aviv) Regions.
R7gd instances are powered by AWS Graviton3 processors with DDR5 memory and are built on the AWS Nitro System. They are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics, and are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. They deliver up to 45% better real-time NVMe storage performance than comparable Graviton2-based instances. Graviton3-based instances also use up to 60% less energy for the same performance than comparable EC2 instances, enabling you to reduce your carbon footprint in the cloud. To learn more, see Amazon R7gd Instances. To get started, see the AWS Management Console.
Amazon EC2 M7gd instances are now available in Asia Pacific (Seoul) Region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the Asia Pacific (Seoul) Region.
These Graviton3-based instances with DDR5 memory are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. They deliver up to 45% better real-time NVMe storage performance than comparable Graviton2-based instances. Graviton3-based instances also use up to 60% less energy for the same performance than comparable EC2 instances, enabling you to reduce your carbon footprint in the cloud. To learn more, see Amazon EC2 M7gd instances. To get started, see the AWS Management Console.
AWS Deadline Cloud now supports Autodesk VRED
AWS Deadline Cloud now supports submitting VRED rendering jobs from within Autodesk VRED as well as through the Deadline Cloud Client. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects for films, television and broadcasting, web content, and design. This new integration with Autodesk VRED extends AWS Deadline Cloud’s capabilities to support Automotive and Manufacturing customer segments, which demand high-fidelity 3D design visualization at scale.
With AWS Deadline Cloud, you can submit Autodesk VRED render jobs from anywhere without having to manage your own render farm infrastructure. Autodesk VRED brings your complex 3D visualization data to life for collaboratively developing digital prototypes, and you can now send those scenes to AWS Deadline Cloud to rapidly render visualizations for review and iteration. The Deadline Cloud for VRED Submitter (for Windows) is available via an installer and via the AWS Deadline Cloud GitHub repository. To learn more, please see the AWS Deadline Cloud documentation and the Deadline Cloud for VRED GitHub repository.
Amazon EC2 M7i instances are now available in the Middle East (UAE) Region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Middle East (UAE) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
M7i instances deliver up to 15% better price performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which enable efficient offload and acceleration of data operations and help optimize performance for these workloads. To learn more, visit Amazon EC2 M7i Instances. To get started, see the AWS Management Console.
AWS Budgets now supports Billing View for cross-account cost monitoring
AWS announces support for Billing View in AWS Budgets, enabling organizations to create budgets that span multiple member accounts without requiring access to the management account. This integration helps organizations better align spend monitoring with their business structure and operational needs.
With this enhancement, you can create budgets based on filtered views of cost management data, scoped by cost allocation tags or by specific AWS accounts in your organization. For example, engineering leaders can create budgets for applications that span multiple accounts using views filtered by cost allocation tags, while FinOps teams can create organization-wide budgets using unfiltered views - all without requiring management account access. This helps streamline budget management while maintaining security best practices by minimizing management account access. This feature is available in all AWS Regions where AWS Budgets and Billing View are available, except the AWS GovCloud (US) Regions and the China Regions. To learn more about the AWS Budgets and Billing View integration, refer to AWS Budgets and Billing View in the AWS Cost Management User Guide.
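A rough sketch of what a budget scoped to a billing view could look like with boto3 follows; the BillingViewArn field name, the ARN, and the account ID are assumptions based on this announcement rather than confirmed API details.

```python
# Sketch: creating a monthly cost budget scoped to a billing view, using boto3.
# The BillingViewArn field is an assumption based on the announcement; the ARN and
# account ID are placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "engineering-app-budget",
        "BudgetLimit": {"Amount": "5000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        "BillingViewArn": "arn:aws:billing::111122223333:billingview/custom-example",  # assumed field
    },
)
```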
Amazon OpenSearch Serverless introduces automatic semantic enrichment
Amazon OpenSearch Serverless now offers automatic semantic enrichment, a breakthrough feature that simplifies semantic search implementation. You can now boost your search relevance with minimal effort, eliminating complex manual configuration through an automated setup process.
Semantic search goes beyond keyword matching by understanding the context and meaning of search queries. For example, when searching for “how to treat a headache,” semantic search intelligently returns relevant results about “migraine remedies” or “pain management techniques” even when these exact terms aren’t present in the query. Previously, implementing semantic search required machine learning (ML) expertise, model hosting, and OpenSearch integration. Automatic semantic enrichment simplifies this process dramatically: you simply specify which fields need semantic search capabilities, and OpenSearch Service handles all semantic enrichment automatically during data ingestion. The feature launches with support for two language variants: English-only and multilingual, covering 15 languages including Arabic, Chinese, Finnish, French, Hindi, Japanese, Korean, Spanish, and more. You pay only for actual usage during data ingestion, with no ongoing costs for storage or search queries. This new feature is automatically enabled for all serverless collections and is now available in the following Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm). To get started, visit our documentation, read the blog, watch the video, and check semantic search pricing. Check the AWS Regional Services List for availability in your Region.
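As a hypothetical illustration, the sketch below marks a field for semantic enrichment when creating an index with opensearch-py; the “semantic” field type shown is an assumption based on this announcement, and the collection endpoint is a placeholder, so follow the linked documentation for the exact mapping syntax.

```python
# Hypothetical sketch: creating an index whose "description" field is marked for
# automatic semantic enrichment on an OpenSearch Serverless collection.
# The "semantic" field type is an assumption based on the announcement; the
# collection endpoint is a placeholder.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1", "aoss")  # 'aoss' = OpenSearch Serverless

client = OpenSearch(
    hosts=[{"host": "example-collection.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

client.indices.create(
    index="products",
    body={
        "mappings": {
            "properties": {
                "title": {"type": "text"},
                # Assumed syntax: flag this field for automatic semantic enrichment.
                "description": {"type": "semantic"},
            }
        }
    },
)
```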
Amazon Location - Geofencing now supports multipolygon and polygon with exclusion zones
Amazon Location Service now supports multipolygons and exclusion zones, simplifying the creation of geofences with complex boundaries. Customers can now create geofences for non-contiguous areas - for example, defining California’s boundaries including offshore territories like the Catalina Islands. These enhanced geofences are fully integrated with existing workflows and are accessible through the AWS console as well as programmatically via the API and SDKs.
Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Amazon Location Service is available in 17 AWS Regions across 5 continents. To learn more, visit the Amazon Location Service Developer Guide.
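The sketch below stores a geofence whose polygon contains an exclusion zone using boto3; the collection name and coordinates are illustrative only, and the ring layout follows the usual GeoJSON convention of an outer boundary followed by holes.

```python
# Sketch: a geofence polygon with an exclusion zone, using boto3.
# The first linear ring is the outer boundary; the following ring is an exclusion
# zone (hole). Coordinates are [longitude, latitude] and purely illustrative;
# the collection name is hypothetical.
import boto3

location = boto3.client("location")

location.put_geofence(
    CollectionName="delivery-zones",  # hypothetical geofence collection
    GeofenceId="downtown-minus-restricted-block",
    Geometry={
        "Polygon": [
            # Outer boundary (counterclockwise, closed ring)
            [[-122.42, 37.77], [-122.39, 37.77], [-122.39, 37.80], [-122.42, 37.80], [-122.42, 37.77]],
            # Exclusion zone inside the boundary (clockwise, closed ring)
            [[-122.41, 37.78], [-122.41, 37.79], [-122.40, 37.79], [-122.40, 37.78], [-122.41, 37.78]],
        ]
    },
)
```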
Amazon OpenSearch Serverless adds support for Hybrid Search, AI connectors, and automations
Amazon OpenSearch Serverless announces support for Neural Search, Hybrid Search, the Workflow API, and AI connectors. This new set of APIs facilitates use cases such as retrieval-augmented generation (RAG) and semantic search.
Neural search enables semantic queries using text and images instead of raw vectors. It uses a high-level API with connectors to Amazon SageMaker, Amazon Bedrock, and other AI services to generate enrichments such as dense or sparse vectors during query and ingestion. Hybrid search combines lexical, neural, and k-NN (vector) queries to deliver higher search relevancy. The Workflow API lets you package OpenSearch AI resources such as models, connectors, and pipelines into templates that automate the multi-step configuration required to enable AI features like neural search, and it simplifies integration with specific model providers such as Amazon Bedrock, Cohere, OpenAI, or DeepSeek. Neural Search, Hybrid Search, the Workflow API, and AI connectors are enabled for all serverless collections in the following Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm). Check the AWS Regional Services List for availability in your Region. For more information about these features, please see the documentation for Neural Search, Hybrid Search, Workflow API, and AI connectors. To learn more about Amazon OpenSearch Serverless, please visit the product page.
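As an illustration, the following sketch issues a hybrid query that combines a lexical clause with a neural clause using opensearch-py; the collection endpoint, index, field names, and model ID are placeholders, and hybrid scoring typically also requires a search pipeline with a normalization processor.

```python
# Sketch: a hybrid query combining a lexical match clause with a neural clause.
# Endpoint, index, field names, and model_id are placeholders.
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

auth = AWSV4SignerAuth(boto3.Session().get_credentials(), "us-east-1", "aoss")
client = OpenSearch(
    hosts=[{"host": "example-collection.us-east-1.aoss.amazonaws.com", "port": 443}],
    http_auth=auth, use_ssl=True, connection_class=RequestsHttpConnection,
)

results = client.search(
    index="products",
    body={
        "query": {
            "hybrid": {
                "queries": [
                    # Lexical (keyword) relevance
                    {"match": {"description": "wireless headphones"}},
                    # Semantic relevance via a connected embedding model
                    {"neural": {
                        "description_embedding": {
                            "query_text": "wireless headphones",
                            "model_id": "<model-id>",  # placeholder
                            "k": 10,
                        }
                    }},
                ]
            }
        }
    },
)
print([hit["_source"] for hit in results["hits"]["hits"]])
```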
Amazon Aurora Serverless v2 now offers up to 30% performance improvement
Amazon Aurora Serverless v2 now offers up to 30% improved performance for databases running on the latest serverless platform version (version 3). Aurora Serverless v2 measures capacity in Aurora Capacity Units (ACUs), where each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You specify a capacity range, and the database scales within this range to support your application’s needs. The version 3 serverless platform supports scaling from 0 up to 256 ACUs.
With improved performance, you can now use Aurora Serverless for even more demanding workloads. All new clusters, database restores, and new clones will launch on the latest platform version. Existing clusters can be upgraded by stopping and restarting the cluster or by using Blue/Green Deployments. You can determine a cluster’s platform version in the AWS Management Console’s instance configuration section or via the RDS API’s ServerlessV2PlatformVersion parameter for a DB cluster. The latest platform version is available in all AWS Regions, including the AWS GovCloud (US) Regions. Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora Serverless v2 database in just a few steps in the AWS Management Console.
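The following sketch reads that ServerlessV2PlatformVersion field with boto3 to check which platform version a cluster is on; the cluster identifier is hypothetical.

```python
# Sketch: checking a cluster's serverless platform version via the
# ServerlessV2PlatformVersion field mentioned above. The identifier is hypothetical.
import boto3

rds = boto3.client("rds")
cluster = rds.describe_db_clusters(DBClusterIdentifier="my-aurora-cluster")["DBClusters"][0]
print(cluster.get("ServerlessV2PlatformVersion", "not reported"))
```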
AWS Blogs
AWS Japan Blog (Japanese)
- Amazon Aurora DSQL in the gaming industry
- Amazon Connect Update Summary — July 2025
- Overcome development chaos with the Amazon Q Developer CLI custom agent
AWS Japan Startup Blog (Japanese)
AWS Compute Blog
AWS for Industries
Artificial Intelligence
- The DIVA logistics agent, powered by Amazon Bedrock
- Automate enterprise workflows by integrating Salesforce Agentforce with Amazon Bedrock Agents
- How Amazon Bedrock powers next-generation account planning at AWS