
Posts by AWS News (Unofficial)

Five new Qwen models for coding agents and efficient reasoning are now available in Amazon SageMaker JumpStart

Today, AWS announced the availability of Qwen3-Coder-Next, Qwen3-30B-A3B, Qwen3-30B-A3B-Thinking-2507, Qwen3-Coder-30B-A3B-Instruct, and Qwen3.5-4B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These five models from Qwen bring specialized capabilities spanning agentic coding, efficient reasoning, extended thinking, and multimodal understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.

These models address different enterprise AI challenges with specialized capabilities. Qwen3-Coder-Next excels at long-horizon reasoning, complex tool use, and recovery from execution failures, making it ideal for powering coding agents in CLI/IDE platforms. Qwen3-30B-A3B uniquely supports seamless switching between thinking and non-thinking modes, making it well suited for general-purpose assistant tasks like multilingual dialogue, math reasoning, and tool calling. Qwen3-30B-A3B-Thinking-2507 delivers significantly improved performance on complex reasoning tasks in math, science, and coding, with enhanced long-context understanding. Qwen3-Coder-30B-A3B-Instruct is designed for agentic coding workflows with a custom function call format and repo-scale context understanding. Qwen3.5-4B supports unified vision-language training and 201 languages, making it ideal for lightweight multimodal deployments.

With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
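As a rough sketch of matching the five models to the use cases described above, a small lookup helper could serve as a starting point. The strings are the model names as announced, not necessarily the exact SageMaker JumpStart model IDs, which should be looked up in SageMaker Studio:

```python
# Illustrative mapping from use case to the Qwen model named in the
# announcement. These are announced model names, not verified JumpStart IDs.
QWEN_MODELS = {
    "agentic-coding-cli": "Qwen3-Coder-Next",            # long-horizon reasoning, tool use
    "general-assistant": "Qwen3-30B-A3B",                # thinking/non-thinking switching
    "complex-reasoning": "Qwen3-30B-A3B-Thinking-2507",  # math, science, long context
    "repo-scale-coding": "Qwen3-Coder-30B-A3B-Instruct", # custom function-call format
    "lightweight-multimodal": "Qwen3.5-4B",              # vision-language, 201 languages
}

def pick_qwen_model(use_case: str) -> str:
    """Return the announced Qwen model name for a given use case."""
    try:
        return QWEN_MODELS[use_case]
    except KeyError:
        raise ValueError(f"unknown use case: {use_case!r}")
```

From there, the chosen model can be deployed with the SageMaker Python SDK or a few clicks in SageMaker Studio, as the announcement describes.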


#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

5 hours ago
Amazon CloudWatch Logs Insights introduces JOIN and sub-query commands

Amazon CloudWatch Logs Insights introduces JOIN and sub-query commands to the Logs Insights query language to accelerate log analysis. Customers who need to analyze logs across multiple log groups or correlate data from different sources no longer need to run multiple queries and manually combine the results.

With JOIN and sub-query commands, you can accelerate troubleshooting across scenarios such as correlating application and infrastructure errors across different services and log groups, analyzing security events across multiple services, or tracking user sessions across distributed systems. For example, you can use a sub-query to identify services with more than 20 errors in the last day, then use JOIN to correlate those results with performance data from a different log group to calculate average response times, helping you prioritize which high-error services also have the worst performance impact, all in a single query.

JOIN and sub-query commands are available today in all commercial AWS Regions. To learn more, see the Amazon CloudWatch Logs Insights documentation: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_LogsInsights.html
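The worked example above might read roughly like the query sketched below. The log-group-style field names are invented for illustration, and the exact JOIN and sub-query grammar should be checked against the Logs Insights query syntax reference before use:

```python
# A sketch of the announcement's example as a Logs Insights query string.
# Field names (service, level, latencyMs) and the precise join/sub-query
# syntax are assumptions, not the authoritative grammar.
QUERY = """
fields @timestamp, service, latencyMs
| join (
    fields service
    | filter level = "ERROR"
    | stats count(*) as errorCount by service
    | filter errorCount > 20
  ) on service
| stats avg(latencyMs) as avgResponseMs by service
""".strip()
```

The sub-query selects services with more than 20 errors, and the outer query joins on `service` to compute average response times for only those services, in a single pass.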


#AWS #AmazonCloudwatchLogs #AmazonCloudwatch

11 hours ago
Amazon Location Service now offers bulk address validation for the United States, Canada, Australia, and the United Kingdom

Amazon Location Service now offers bulk address validation for the United States, Canada, Australia, and the United Kingdom. Customers can now validate, correct, and standardize large volumes of addresses at scale, whether cleaning customer databases before a CRM migration, verifying shipping addresses to reduce failed deliveries, screening addresses for identity verification and fraud prevention, or improving direct mail targeting and insurance underwriting accuracy. This capability supports use cases across healthcare, financial services, transportation and logistics, retail, and more.

Address validation checks addresses against authoritative postal data, corrects common errors like misspellings, missing postal codes, and non-standard abbreviations, and standardizes formatting to match regional postal rules. Each result includes a confidence score and deliverability indicators so applications know exactly what to trust and act on. Using the new Amazon Location Service Jobs API, customers upload their address records to their own Amazon S3 bucket, submit a validation job, and retrieve enriched, standardized results when processing is complete. For addresses in the United States, Canada, and Australia, customers can optionally request position (geocode) coordinates alongside validated address results in the same job.

Address validation is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Canada (Central), Europe (London), and South America (São Paulo). To learn more, visit the address validation documentation: https://docs.aws.amazon.com/location/latest/developerguide/places-address-validation.html
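As a sketch only, the upload-submit-retrieve workflow could be wrapped as below. The request field names here are illustrative placeholders, not the published Jobs API shape, which is defined in the Amazon Location Service documentation:

```python
def build_validation_job_request(input_s3_uri: str, output_s3_uri: str,
                                 country: str, include_geocode: bool = False) -> dict:
    """Build a hypothetical bulk-address-validation job request.

    Field names below are assumptions for illustration; consult the
    Amazon Location Service Jobs API reference for the real shape.
    """
    if include_geocode and country not in {"US", "CA", "AU"}:
        # Per the announcement, geocode output is optional only for the
        # United States, Canada, and Australia.
        raise ValueError("geocodes are only available for US, CA, and AU")
    return {
        "InputS3Uri": input_s3_uri,    # customer-owned bucket with address records
        "OutputS3Uri": output_s3_uri,  # where enriched, standardized results land
        "Country": country,
        "IncludePosition": include_geocode,
    }
```

The point of the sketch is the flow: records go to your own S3 bucket, the job references the bucket, and results are retrieved from S3 when processing completes.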


#AWS

11 hours ago
Introducing the Amazon EKS Hybrid Nodes gateway for hybrid Kubernetes networking

Amazon Elastic Kubernetes Service (Amazon EKS, https://aws.amazon.com/eks/) now offers the Amazon EKS Hybrid Nodes gateway, a feature that automates networking between your Amazon EKS cluster VPC and Kubernetes Pods running on Amazon EKS Hybrid Nodes. The Amazon EKS Hybrid Nodes gateway eliminates the need to make on-premises pod networks routable or coordinate network infrastructure changes when running in hybrid Kubernetes environments.

Networking in hybrid Kubernetes environments can be complex, often requiring changes to on-premises routing configurations, coordination with network teams, and ongoing maintenance as workloads scale. The Amazon EKS Hybrid Nodes gateway addresses these challenges by automatically enabling Kubernetes control plane-to-webhook communication, pod-to-pod traffic across cloud and on-premises environments, and connectivity for AWS services such as Application Load Balancers, Network Load Balancers, and Amazon Managed Service for Prometheus. Customers deploy the Amazon EKS Hybrid Nodes gateway to Amazon EC2 instances using Helm, and the gateway automatically maintains VPC route tables as workloads scale. The Amazon EKS Hybrid Nodes gateway codebase is open source.

The Amazon EKS Hybrid Nodes gateway is available in all AWS Regions where Amazon EKS Hybrid Nodes is available, except the China Regions. It is offered at no additional charge; you pay for the underlying AWS infrastructure used to run the gateway, including Amazon EC2 instance charges and any associated data transfer fees. To get started, visit the Amazon EKS Hybrid Nodes gateway documentation: http://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-gateway-overview.html


#AWS #AmazonEks

11 hours ago
AWS Glue now supports OAuth 2.0 for Snowflake connectivity

Starting today, AWS Glue supports OAuth 2.0 authorization and authentication for native Snowflake connectivity, enabling customers to read from and write to Snowflake without sharing user credentials. This makes it easier for enterprises to maintain security compliance while building data integration pipelines. With OAuth support, you can now securely access Snowflake data within AWS Glue using temporary token-based authorization.

AWS Glue provides a built-in connector to Snowflake, which helps you integrate Snowflake data with other sources on a single platform while leveraging the scalability and performance of the AWS Glue Spark engine, all without installing or managing connector libraries. Previously, connecting to Snowflake required using persistent credentials or private keys. With OAuth 2.0 support, you can now eliminate credential management entirely, relying instead on secure, temporary tokens that enhance security and simplify access control. This approach enables granular access control, allowing you to define precise permissions for different users and applications. Additionally, token-based authentication provides improved auditability, making it easier to track and monitor data access patterns across your organization.

OAuth 2.0 support for AWS Glue's Snowflake connector is available in all commercial AWS Regions where AWS Glue is available. To get started with configuring your AWS Glue Snowflake connection with OAuth, visit the AWS Glue documentation: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect-snowflake-home.html
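A hedged sketch of what token-based connection options might look like in a Glue Spark job. The key names mirror the Snowflake Spark connector's common options (sfUrl, sfDatabase, and so on); the exact keys AWS Glue expects for an OAuth 2.0 connection should be taken from the Glue documentation linked above:

```python
def snowflake_oauth_connection_options(account_url: str, database: str,
                                       schema: str, oauth_token: str) -> dict:
    """Illustrative connection options for an OAuth-based Snowflake read.

    The option names follow the Snowflake Spark connector convention and
    are assumptions here, not a verified AWS Glue API contract.
    """
    return {
        "sfUrl": account_url,
        "sfDatabase": database,
        "sfSchema": schema,
        "sfAuthenticator": "oauth",  # token-based auth instead of a stored password
        "sfToken": oauth_token,      # temporary token; no persistent credentials
    }
```

The practical difference from the previous setup is visible in the dict: there is no username/password or private-key entry, only a short-lived token.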


#AWS #AwsGlue

12 hours ago
Amazon CloudWatch pipelines now supports configuration of processors via AI

Amazon CloudWatch pipelines now lets you configure log processors using natural language descriptions powered by generative AI. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Setting up the right combination of processors to parse and enrich logs can be time-consuming, especially when working with complex log formats. With AI-assisted configuration, you can simply describe the processing you need in plain language and have the pipeline configuration generated for you automatically.

When creating a pipeline in the CloudWatch console, toggle the AI-assisted option during the processing step and enter a natural language description of your desired transformations. The system generates the processor configuration along with a sample log event, so you can immediately verify the output before deploying. This reduces setup time and makes it easier to get your pipelines running correctly without needing deep familiarity with individual processor settings.

AI-assisted processor configuration is available at no additional cost in all AWS Regions where CloudWatch pipelines is generally available. Standard CloudWatch Logs ingestion and storage rates still apply. To get started, open the Amazon CloudWatch console, navigate to pipelines under Ingestion, and follow the pipeline wizard. To learn more, see the CloudWatch pipelines documentation: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-pipelines.html


#AWS #AmazonCloudwatch

12 hours ago
AWS Neuron SDK 2.29.0 now available with NKI and Neuron Explorer out of Beta, a new CPU Simulator, and an expanded NKI Library

In this release, AWS Neuron SDK 2.29.0 promotes the Neuron Kernel Interface (NKI) from Beta to Stable with version 0.3.0. NKI gives developers direct, low-level programming access to AWS Trainium and AWS Inferentia NeuronCores using a Python-based syntax. This release introduces the NKI Standard Library, which exposes developer-visible source code for all NKI APIs and native language objects. It also contains a new CPU Simulator that lets developers write, test, and debug NKI kernels locally on a standard CPU, without requiring Trainium hardware, using standard Python debugging tools. NKI 0.3.0 also adds new ISA-level features including a dedicated exponential instruction, matmul accumulation control, DMA priority settings for Trn3, and variable-length all-to-all collectives.

The NKI Library expands with 7 new experimental kernels covering Conv1D, a multi-layer Transformer token generation megakernel, fused communication-compute primitives for Trainium2, and dynamic tiling operations. Existing kernels also receive improvements: Attention CTE scales to larger batch sizes and sequence lengths, MLP adds mixed-precision quantization paths, and MoE TKG introduces a dynamic all-expert algorithm. For inference, NxD Inference improves vision language model support with optimizations for Qwen3 VL and Qwen2 VL, including text-model sequence parallelism and vision data parallelism. The vLLM Neuron Plugin is updated to version 0.5.0. Neuron Explorer, Neuron’s profiling and debugging suite of tools, also moves from Beta to Stable. The System Trace Viewer now supports the full set of Device widgets for multi-device profile analysis, and the tool is available on the VS Code Extension Marketplace for streamlined installation.

For full release details, see the release notes: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/2.29.0.html. The SDK is available in all AWS Regions supporting Inferentia and Trainium instances. Learn more:
NKI: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/nki/index.html
vLLM on NxD Inference: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/vllm/index.html
Neuron Explorer: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/tools/neuron-explorer/index.html


#AWS

13 hours ago
Amazon SageMaker now supports multi-region replication from IAM Identity Center

Amazon SageMaker now supports multi-region replication from IAM Identity Center (IdC), enabling you to deploy SageMaker Unified Studio domains in different Regions from your IdC instance. This new capability empowers enterprise customers, particularly those in regulated industries like financial services and healthcare, to maintain compliance while leveraging centralized workforce identity management.

As an Amazon SageMaker Unified Studio administrator, you can deploy SageMaker domains closer to your workforce based on data residency needs while maintaining seamless single sign-on (SSO) access. Organizations can address use cases such as maintaining IdC in one Region while processing sensitive data in compliance-required Regions, supporting global operations with centralized identity management, and meeting data sovereignty requirements without compromising SSO capabilities.

To get started, see the SageMaker Unified Studio user management documentation (https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/user-management.html), and to learn about setting up IAM Identity Center multi-Region support, see https://docs.aws.amazon.com/singlesignon/latest/userguide/multi-region-iam-identity-center.html


#AWS #AmazonSagemaker

13 hours ago
AWS Transform custom is now available in six additional AWS Regions

AWS Transform custom is now available in six additional AWS Regions: Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (London). AWS Transform custom enables organizations to modernize and transform code at scale using AWS-managed and custom transformations. You can upgrade language versions, migrate frameworks, optimize performance, and analyze code bases using transformations that are ready to use or can be customized to meet your organization's specific requirements. These transformations benefit from continuous improvement, learning from each engagement to deliver increasingly accurate and efficient results.

With this expansion, AWS Transform custom is now available in a total of eight AWS Regions: US East (N. Virginia), Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (Frankfurt, London). To learn more, visit the AWS Transform custom page (https://aws.amazon.com/transform/custom/) and documentation (https://docs.aws.amazon.com/transform/latest/userguide/custom.html).


#AWS

14 hours ago
Amazon Athena Spark adds support for AWS PrivateLink

Amazon Athena Spark now supports AWS PrivateLink (https://aws.amazon.com/privatelink/), so you can access APIs and endpoints from your Amazon Virtual Private Cloud (VPC) without traversing the public internet. This feature can help you meet compliance requirements by allowing you to access and use Athena Spark APIs and endpoints entirely within the AWS network.

You can now create AWS PrivateLink interface endpoints to connect from clients in your VPC. The Athena VPC endpoint supports all Athena Spark APIs and endpoints, including the Spark Connect, Spark Live UI, and Spark History Server endpoints. Communication between your VPC and Athena Spark APIs and endpoints is then conducted entirely within the AWS network, providing a secure pathway for your data.

To get started, you can create an interface VPC endpoint to connect to Amazon Athena Spark using the AWS Management Console, AWS Command Line Interface (AWS CLI) commands, or AWS CloudFormation. This new feature is available in all AWS Regions where Amazon Athena Spark and AWS PrivateLink are available (see https://docs.aws.amazon.com/athena/latest/ug/notebooks-spark-considerations-and-limitations.html). For more information, refer to the AWS PrivateLink documentation (https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) and the Athena documentation (https://docs.aws.amazon.com/athena/latest/ug/athena-spark-vpc-endpoint.html).
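Creating the interface endpoint with the EC2 API could look roughly like the parameter set below. The endpoint service name follows the usual com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern; confirm the exact Athena service name in the PrivateLink documentation before relying on it:

```python
def athena_interface_endpoint_params(vpc_id: str, region: str,
                                     subnet_ids: list, sg_ids: list) -> dict:
    """Parameters for EC2 CreateVpcEndpoint to reach Athena privately.

    The ServiceName value assumes the standard naming convention; verify
    it against the PrivateLink documentation for your Region.
    """
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.athena",
        "SubnetIds": subnet_ids,            # one subnet per AZ you want covered
        "SecurityGroupIds": sg_ids,         # must allow HTTPS from your clients
        "PrivateDnsEnabled": True,          # resolve the public name to private IPs
    }
```

These parameters would typically be passed to `boto3` client `ec2.create_vpc_endpoint(**params)` or expressed equivalently in the AWS CLI or CloudFormation.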


#AWS #AmazonAthena

14 hours ago
Amazon Connect expands agentic voice speech-to-speech experiences to three new AWS Regions and ten locales

Amazon Connect now expands agentic voice speech-to-speech experiences to three additional AWS Regions: Asia Pacific (Seoul), Asia Pacific (Singapore), and Europe (Frankfurt), along with new locales including Australian English, British English, Singaporean English, Spanish, French, German, Italian, and Korean. With these updates, you can deliver natural, human-like voice AI experiences to a broader range of customers across more Regions and languages.

Amazon Connect's agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and messaging channels to automate routine and complex service tasks. Connect's agentic speech-to-speech voice AI agents understand not only what your customers say but how they say it, adapting voice responses to match tone and sentiment while maintaining natural conversational pace.

To learn more about this feature, see the Amazon Connect Administrator Guide: https://docs.aws.amazon.com/connect/latest/adminguide/what-is-amazon-connect.html. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, visit https://aws.amazon.com/connect/?refid=99383dcb-51ed-4091-8972-da30eaa0098f


#AWS #AmazonConnect

15 hours ago
AWS Backup adds Amazon Redshift Serverless and Aurora DSQL support for AWS Organizations backup policies

AWS Backup now supports Amazon Redshift Serverless namespaces and Amazon Aurora DSQL clusters as resource types in AWS Organizations backup policies. Organization administrators can now define backup policy rules that directly target these resource types across member accounts. Previously, backing up Redshift Serverless namespaces and Aurora DSQL clusters through organization backup policies required using tag-based selections or backing up all resources in a member account. With this launch, administrators can specify these resource types directly in their backup policy selections, providing more precise control over which resources are included in or excluded from Organization-wide backup plans.

This capability is available in all AWS Commercial and GovCloud Regions where AWS Backup and the respective services are available. To get started, visit the backup policy syntax documentation (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_backup_syntax.html) or the AWS Backup console (https://console.aws.amazon.com/backup).
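A hedged sketch of what an organization backup policy targeting these resource types might look like. Organization backup policies use `@@assign`-style operators as shown, but the selection keys used here for direct resource-type targeting are assumptions; consult the backup policy syntax reference linked above for the authoritative schema:

```json
{
  "plans": {
    "OrgDailyBackups": {
      "regions": { "@@assign": ["us-east-1"] },
      "rules": {
        "Daily": {
          "schedule_expression": { "@@assign": "cron(0 5 ? * * *)" },
          "target_backup_vault_name": { "@@assign": "Default" }
        }
      },
      "selections": {
        "resources": {
          "RedshiftServerlessAndDsql": {
            "iam_role_arn": { "@@assign": "arn:aws:iam::$account:role/BackupRole" },
            "resource_types": { "@@assign": ["Redshift Serverless", "Aurora DSQL"] }
          }
        }
      }
    }
  }
}
```

The substantive change the announcement describes is the selection block: instead of tag-based selection, the two new resource types can be named directly.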


#AWS #AwsOrganizations #AwsBackup

15 hours ago
AWS Marketplace streamlines VAT payment for deemed supply transactions

AWS Marketplace now offers sellers a streamlined self-service process to submit Value Added Tax (VAT) invoices and receive automated VAT disbursements for deemed supply of digital services in the European Union, Norway, and the United Kingdom. Under European Union, United Kingdom, and Norwegian VAT laws, when AWS Marketplace facilitates digital service sales, the law creates a deemed supply arrangement between sellers and the marketplace. To receive VAT payment, sellers are required to invoice the relevant AWS Europe, Middle East, and Africa (EMEA) SARL branch facilitating their transaction. This new capability provides sellers a unified experience within AWS Marketplace to submit VAT invoices and receive VAT payments, simplifying tax compliance under deemed supply arrangements.

Sellers can now access the new experience through the AWS Marketplace Management Portal or AWS Partner Central, submit VAT invoices, track invoice status in real time, and receive automated VAT payments. The system automatically validates invoices against mandatory fields and disburses VAT amounts once buyer payment is received. Sellers can consolidate multiple deemed supply transactions into a single invoice per period, provided they relate to the same AWS EMEA branch and currency. Sellers can also submit invoices before buyer payment is received, with the system automatically processing disbursements when all conditions are met. Enhanced reporting capabilities through Seller Reports help sellers identify eligible transactions and reconcile disbursements for audit and financial reporting purposes. This launch eliminates the previous manual process and separate platform onboarding while reducing the administrative burden of tracking VAT invoices and payments.

This capability is available for transactions where both seller and buyer AWS accounts are located in the same country when transacting via the AWS EMEA branch across 20 jurisdictions: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, and the United Kingdom. To learn more about VAT payment for deemed supply transactions and invoice submission requirements, visit the documentation (https://docs.aws.amazon.com/marketplace/latest/userguide/vat-on-deemed-supply.html) or the FAQ (https://docs.aws.amazon.com/marketplace/latest/userguide/vat-deemed-supply-faq.html).


#AWS

15 hours ago
Amazon Connect adds touchtone buffering for AI-powered self-service

Amazon Connect now enables you to automatically pass customer context to personalize self-service experiences from the moment a call connects. When a customer initiates a call from a website, mobile app, or notification link, you can automatically pass context, such as customer IDs, session references, and campaign codes, into the call. AI agents use this context to recognize the caller, understand the reason for the call, take action, and resolve issues without requiring callers to re-identify themselves or repeat why they are calling.

To learn more about these features, see the Amazon Connect documentation: https://docs.aws.amazon.com/connect/latest/adminguide/touchtone-buffering.html. These features are available in all AWS Regions where Amazon Connect is available.


#AWS #AmazonConnect

16 hours ago
Amazon EC2 G7e instances now available in AWS Local Zones in Los Angeles

Today, AWS announces the general availability of Amazon EC2 G7e instances (https://aws.amazon.com/ec2/instance-types/g7e/) in AWS Local Zones in Los Angeles, California. G7e instances feature NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and 5th generation Intel Xeon Scalable (Emerald Rapids) processors, bringing high-performance GPU compute closer to end users in Los Angeles.

For creative workloads, you can use G7e instances to run studio workstation workloads with low-latency access to local storage, and post-production workloads including visual effects (VFX) editorial, color correction, and VFX finishing. G7e instances support enhanced real-time rendering on graphics engines and 2D/3D VFX composition software. For AI workloads, you can also use G7e instances to deploy Large Language Models (LLMs), inference, and agentic AI at the edge.

To get started, opt in to the Los Angeles Local Zone (us-west-2-lax-1b) in the EC2 console (https://us-east-1.console.aws.amazon.com/ec2globalview/home#RegionsAndZones:tabId=lz). You can launch G7e instances from the EC2 console (https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#Overview), the AWS CLI (https://aws.amazon.com/cli/), or the AWS SDKs. G7e instances are available through On-Demand and Savings Plans. To learn more, visit the AWS Local Zones features page: https://aws.amazon.com/about-aws/global-infrastructure/localzones/features/?nc=sn&loc=2
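Launching one of these instances programmatically might look like the sketch below. The `g7e.xlarge` size is an assumption (check the G7e page for the sizes actually offered in the Local Zone), and the subnet must live in the opted-in Los Angeles zone:

```python
def g7e_run_instances_params(subnet_id: str, ami_id: str) -> dict:
    """EC2 RunInstances parameters for a G7e instance in the LA Local Zone.

    The instance size below is illustrative; verify available sizes for
    availability zone us-west-2-lax-1b before launching.
    """
    return {
        "InstanceType": "g7e.xlarge",  # assumed size, confirm availability
        "ImageId": ami_id,
        "MinCount": 1,
        "MaxCount": 1,
        # The subnet must be in the opted-in Los Angeles Local Zone
        # (availability zone us-west-2-lax-1b).
        "SubnetId": subnet_id,
    }
```

These parameters would be passed to `boto3` client `ec2.run_instances(**params)` in the us-west-2 Region, or expressed equivalently with the AWS CLI.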


#AWS #AwsLocalZones

16 hours ago
AWS Lambda Durable Execution SDK for Java GA

Today, AWS announces the general availability of the AWS Lambda Durable Execution SDK for Java, empowering Java developers to build resilient, long-running workflows using Lambda durable functions. With this SDK, developers can create multi-step applications like order processing pipelines, AI agent orchestration, and human-in-the-loop approvals directly in their applications without implementing custom progress tracking or integrating external orchestration services. Lambda durable functions extend Lambda's event-driven programming model with operations that checkpoint progress automatically and pause execution for up to a year when waiting on external events.

The AWS Lambda Durable Execution SDK for Java provides an idiomatic Java experience for building with Lambda durable functions. It includes steps for progress tracking, callback integration for human and agent-in-the-loop workflows, durable invocation for reliable function chaining, and waits for efficient suspension. The SDK is compatible with Java 17+ and can be deployed using Lambda managed runtimes or functions packaged as container images. The local testing emulator in the SDK enables developers to build and debug locally before deploying to production.

To get started, see the Lambda durable functions documentation (https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html) and the SDK on GitHub (https://github.com/aws/aws-durable-execution-sdk-java/). For Regional availability and pricing details, see https://builder.aws.com/build/capabilities and https://aws.amazon.com/lambda/pricing/.
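The SDK itself is Java and its actual API is not shown here. Purely to illustrate the checkpoint-and-resume idea it automates, here is a minimal, language-agnostic sketch in Python: each step's result is recorded, so a re-run after a crash skips completed steps instead of redoing them:

```python
# Hand-rolled illustration of durable execution: the SDK provides this
# pattern (plus waits and callbacks) as library primitives; this sketch
# only shows why checkpointing makes a multi-step workflow restartable.
def run_workflow(checkpoints: dict, steps: dict) -> dict:
    """Execute named steps at most once each, recording results."""
    for name, fn in steps.items():
        if name not in checkpoints:   # skip steps already completed
            checkpoints[name] = fn()  # checkpoint the step's result
    return checkpoints
```

In the real service the checkpoint store is managed by Lambda and survives function suspension for up to a year; here it is just a dict passed between runs.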


#AWS #AwsLambda

17 hours ago
AWS Lambda functions can now mount Amazon S3 buckets as file systems with S3 Files

AWS Lambda now supports Amazon S3 Files (https://aws.amazon.com/s3/features/files/), enabling your Lambda functions to mount Amazon S3 buckets as file systems and perform standard file operations without downloading data for processing. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. Multiple Lambda functions can connect to the same S3 Files file system simultaneously, sharing data through a common workspace without building custom synchronization logic.

The S3 Files integration simplifies stateful workloads in Lambda by eliminating the overhead of downloading objects, uploading results, and managing ephemeral storage limits. This is particularly valuable for AI and machine learning workloads where agents need to persist memory and share state across pipeline steps. Lambda durable functions (https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html) make these multi-step AI workflows possible by orchestrating parallel execution with automatic checkpointing. For example, an orchestrator function can clone a repository to a shared workspace while multiple agent functions analyze the code in parallel. The durable function handles checkpointing of execution state while S3 Files provides seamless data sharing across all steps.

To use S3 Files with Lambda, configure your function to mount an S3 bucket through the Lambda console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS Serverless Application Model (SAM). To learn more about how to use S3 Files with your Lambda function, visit the Lambda documentation: https://docs.aws.amazon.com/lambda/latest/dg/configuration-filesystem-s3files.html

S3 Files is supported for Lambda functions not configured with a capacity provider, in all AWS Regions where both Lambda and S3 Files are available (https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/), at no additional charge beyond standard Lambda and S3 pricing.

Amazon Aurora serverless: Up to 30% better performance, smarter scaling, and still scales to zero Amazon Aurora Serverless (https://aws.amazon.com/rds/aurora/serverless/) — the autoscaling database that scales up to support your most demanding workloads and down to zero when you don't need it — just got faster and smarter, with up to 30% better performance than the previous version and enhanced scaling that understands your workload. It's especially well suited for agentic AI applications, which typically have bursts of activity, long idle windows, and unpredictable patterns. Aurora serverless handles all of it automatically, scaling capacity with your agents rather than against them, and you only pay for what you actually use. When not in use, the database automatically scales down to zero to save cost. With improved performance and scaling, you can now use serverless for even more demanding workloads. The enhanced scaling algorithm enables you to efficiently run workloads where multiple tasks compete for resources, such as busy web applications and API services. These improvements are available in platform version 4 at no additional cost. All new clusters, database restores, and new clones will automatically launch on platform version 4. Existing clusters on platform version 1, 2, or 3 can upgrade directly to platform version 4 by applying the pending maintenance action, stopping and restarting the cluster, or using blue/green deployments. You can verify your cluster's platform version in the AWS Console under the instance configuration section or via the RDS API's ServerlessV2PlatformVersion parameter. To learn more, read the launch blog post at https://aws.amazon.com/blogs/database/aurora-serverless-faster-performance-enhanced-scaling-and-still-scales-down-to-zero/. Aurora serverless is an on-demand, automatic scaling configuration for Amazon Aurora (https://aws.amazon.com/rds/aurora/). For pricing details and Region availability, visit https://aws.amazon.com/rds/aurora/pricing/.
To learn more, read the capacity scaling documentation at https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.capacity, and get started by creating an Aurora serverless database in just a few steps in the AWS Management Console (https://console.aws.amazon.com/).
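The `ServerlessV2PlatformVersion` check can be scripted. The sketch below assumes a simplified `describe_db_clusters`-style response; the field names follow the RDS API, but treat the overall shape as illustrative.

```python
# Find clusters still on platform version 1-3 that can upgrade to version 4.
def clusters_needing_upgrade(response: dict) -> list:
    stale = []
    for cluster in response.get("DBClusters", []):
        version = int(cluster.get("ServerlessV2PlatformVersion", "1"))
        if version < 4:  # versions 1-3 can upgrade directly to 4
            stale.append(cluster["DBClusterIdentifier"])
    return stale

# Simplified sample of what rds.describe_db_clusters() might return
sample = {"DBClusters": [
    {"DBClusterIdentifier": "agents-db", "ServerlessV2PlatformVersion": "2"},
    {"DBClusterIdentifier": "reports-db", "ServerlessV2PlatformVersion": "4"},
]}
stale = clusters_needing_upgrade(sample)
```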

Amazon Connect outbound campaigns now supports hourly segment refresh Amazon Connect Outbound Campaigns now allows you to refresh campaign segments as frequently as every hour, reduced from the previous minimum of 24 hours. This enables campaigns to reach newly eligible customers throughout the day rather than waiting for the next daily run. With hourly segment refresh, your campaigns stay current with changing business conditions across all campaign types. A collections team can start outreach to newly delinquent accounts the same afternoon they are flagged. A healthcare provider can begin appointment reminder calls within an hour of a new booking. A multi-step journey, such as sending an SMS reminder followed by a voice call if the customer doesn't respond, can enroll new customers throughout the day instead of in a single daily batch. This capability is available, at no additional cost, in all AWS Regions where Amazon Connect Outbound Campaigns is offered. To get started, enable the Refresh option in your campaign configuration in the Amazon Connect console or via the API. To learn more, see the campaign scheduling documentation at https://docs.aws.amazon.com/connect/latest/adminguide/how-to-create-campaigns.html#schedule-campaign.

Amazon Connect Outbound Campaigns now supports contact priority ordering Amazon Connect Outbound Campaigns now allows you to dial contacts in configurable priority order based on up to 10 profile attributes for voice campaigns and voice activities in journeys. This helps you focus agent time on the most valuable customers or time-sensitive opportunities, improving campaign effectiveness and conversion rates. With contact priority ordering, you can sort segments on attributes such as customer lifetime value, account tier, or appointment date. For example, a financial services team can prioritize outreach to high-value accounts nearing contract renewal, or a healthcare provider can ensure patients with the earliest upcoming appointments are contacted first. Initial dial attempts always take precedence over reattempts, ensuring your priority order is maintained throughout campaign execution. This capability is available, at no additional cost, in all AWS Regions where Amazon Connect Outbound Campaigns is offered. To get started, configure sort attributes when building segments in Amazon Connect Customer Profiles. To learn more, see the Amazon Connect best practices (https://docs.aws.amazon.com/connect/latest/adminguide/outbound-campaign-best-practices.html) and segment-building documentation (https://docs.aws.amazon.com/connect/latest/adminguide/customer-segments-building-segments.html).
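The ordering rule described above (initial attempts first, then up to 10 sort attributes) can be illustrated with a plain sort. Attribute names like `account_tier` are hypothetical examples, not Connect API fields.

```python
# Sketch of the dial-ordering rule: initial attempts take precedence over
# reattempts, then contacts sort by the configured profile attributes.
def dial_order(contacts: list, sort_attrs: list) -> list:
    if len(sort_attrs) > 10:
        raise ValueError("campaigns support up to 10 sort attributes")
    return sorted(
        contacts,
        key=lambda c: (c["is_reattempt"],           # initial dials first
                       [c[a] for a in sort_attrs])  # then priority attributes
    )

contacts = [
    {"name": "b", "is_reattempt": False, "account_tier": 2},
    {"name": "a", "is_reattempt": True,  "account_tier": 1},
    {"name": "c", "is_reattempt": False, "account_tier": 1},
]
ordered = dial_order(contacts, ["account_tier"])
```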

AWS Managed Microsoft AD now supports Kerberos Encryption audit event logs Starting today, AWS Managed Microsoft AD supports forwarding Kerberos Encryption audit event logs (Event IDs 201–209) to Amazon CloudWatch Logs. These logs provide visibility into the encryption types used by your applications and services, helping you identify which resources are using RC4 encryption versus AES encryption. This visibility allows you to decide whether to upgrade clients to AES encryption (recommended for improved security) or maintain RC4 support based on your environment's compatibility requirements. To get started, navigate to your AWS Managed Microsoft AD directory Network and Security tab in the AWS Directory Service console and enable log forwarding to Amazon CloudWatch Logs. You can then review the Kerberos Encryption audit events to understand your current encryption settings. To learn more, see https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_enable_log_forwarding.html. This feature is available in all AWS Regions (https://docs.aws.amazon.com/directoryservice/latest/admin-guide/regions.html) where AWS Managed Microsoft AD is available, except in the Middle East (UAE) and Middle East (Bahrain) Regions.
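Once the events land in CloudWatch Logs, a small script can tally RC4 versus AES usage. The event and field layout below is a simplified assumption for illustration, not the exact log schema; the 201–209 Event ID range comes from the announcement.

```python
# Classify Kerberos encryption audit events by encryption family.
RC4_TYPES = {"RC4-HMAC"}
AES_TYPES = {"AES128-CTS-HMAC-SHA1-96", "AES256-CTS-HMAC-SHA1-96"}

def tally_encryption(events: list) -> dict:
    counts = {"rc4": 0, "aes": 0, "other": 0}
    for e in events:
        if not 201 <= e["event_id"] <= 209:
            continue  # only the Kerberos Encryption audit events
        etype = e["encryption_type"]
        if etype in RC4_TYPES:
            counts["rc4"] += 1
        elif etype in AES_TYPES:
            counts["aes"] += 1
        else:
            counts["other"] += 1
    return counts

events = [
    {"event_id": 201, "encryption_type": "RC4-HMAC"},
    {"event_id": 205, "encryption_type": "AES256-CTS-HMAC-SHA1-96"},
    {"event_id": 999, "encryption_type": "RC4-HMAC"},  # not a Kerberos audit event
]
```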

MSK Replicator now supports replication from external Apache Kafka clusters to MSK Express Brokers Amazon MSK Replicator now supports data replication from external Apache Kafka clusters—including on-premises, self-managed on AWS, or other cloud providers—to Amazon MSK Express Brokers (https://aws.amazon.com/msk/features/express-brokers-for-amazon-msk/). This capability simplifies workload migration to MSK Express Brokers, supports disaster recovery by using MSK Express-based clusters as a failover or backup target, and enables data distribution across hybrid and multi-cloud environments. MSK Replicator is a feature of Amazon MSK that automates data replication between Kafka clusters, eliminating the need to manage custom replication infrastructure or configure open-source tools. MSK Express brokers are designed to deliver up to 3 times more throughput per broker, scale up to 20 times faster, and reduce recovery time by 90 percent compared to Standard brokers running Apache Kafka. With this launch, you can now use MSK Replicator to replicate data from external Kafka clusters to Express brokers on Amazon MSK. You can also use MSK Replicator to replicate data from Amazon MSK Express to external Kafka clusters for reliable failback or multi-cloud data distribution. Unlike self-managed replication tools, MSK Replicator lets you retain your original Kafka topic names during replication while automatically avoiding infinite replication loops. It also synchronizes consumer group offsets bidirectionally, enabling you to move producers and consumers across clusters independently, in any order, without coordination constraints or the risk of data loss. This new capability is supported in all AWS Regions where MSK Express brokers are available.
Watch a demo video (https://www.youtube.com/watch?v=GKFFXzZptGw) to see it in action, or visit the MSK Replicator documentation (https://docs.aws.amazon.com/msk/latest/developerguide/msk-replicator.html), feature page (https://aws.amazon.com/msk/features/msk-replicator/), and pricing page (https://aws.amazon.com/msk/pricing/), and read this blog post (https://aws.amazon.com/blogs/big-data/migrate-third-party-and-self-managed-apache-kafka-clusters-to-amazon-msk-express-brokers-with-amazon-msk-replicator/) to learn more.
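The loop-avoidance behavior (keeping original topic names without re-replicating your own output) can be pictured with a provenance check: never copy a record back to the cluster it originally came from. The header name and record shape here are hypothetical; MSK Replicator's internal mechanism is not documented in this announcement.

```python
# Hypothetical provenance check to avoid infinite replication loops when
# topic names are preserved in both directions.
def should_replicate(record: dict, target_cluster: str) -> bool:
    # Skip records that originated on the cluster we are replicating toward
    return record.get("headers", {}).get("replicated-from") != target_cluster

records = [
    {"value": "order-1", "headers": {}},
    {"value": "order-2", "headers": {"replicated-from": "on-prem-kafka"}},
]
# Replicating from Express back toward the on-prem cluster: order-2 came
# from on-prem originally, so it must not be copied back.
to_copy = [r for r in records if should_replicate(r, "on-prem-kafka")]
```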

Amazon MSK Replicator now supports enhanced consumer offset synchronization for bidirectional replication Amazon MSK Replicator now provides enhanced consumer offset synchronization for bidirectional replication, enabling applications to resume processing from the correct position when moving across Kafka clusters. This capability enables you to move producer and consumer applications between clusters independently, in any order, without the risk of data loss. MSK Replicator is a feature of Amazon MSK that automates data replication between Kafka clusters, eliminating the need to manage custom replication infrastructure or configure open-source tools. Previously, while replicating bidirectionally with MSK Replicator, consumer group offsets were synchronized only when producers and consumers were active on the same cluster, requiring careful sequencing of application migrations between clusters and increasing the risk of duplicate message processing during rollbacks. With this launch, MSK Replicator synchronizes consumer group offsets across source and target clusters regardless of where producers are running, enabling applications to move between clusters without coordination constraints or data duplication risks. You can enable enhanced consumer offset synchronization when creating a Replicator using the Amazon MSK console, AWS CLI, or AWS CloudFormation. This capability is supported in all AWS Regions where MSK Replicator is available. To learn more, visit the MSK Replicator documentation (https://docs.aws.amazon.com/msk/latest/developerguide/msk-replicator.html), feature page (https://aws.amazon.com/msk/features/msk-replicator/), and pricing page (https://aws.amazon.com/msk/pricing/), and read this blog post (https://aws.amazon.com/blogs/big-data/migrate-third-party-and-self-managed-apache-kafka-clusters-to-amazon-msk-express-brokers-with-amazon-msk-replicator/).
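Conceptually, offset synchronization means translating a consumer group's committed offset on one cluster to the matching position on the other. The checkpoint-mapping sketch below is a generic illustration of that idea, not MSK Replicator's actual data structure.

```python
import bisect

# Translate a source-cluster offset to the target-cluster position using
# sorted (source_offset, target_offset) checkpoint pairs.
def translate_offset(checkpoints: list, src_offset: int) -> int:
    sources = [s for s, _ in checkpoints]
    idx = bisect.bisect_right(sources, src_offset) - 1
    if idx < 0:
        return 0  # consumer is behind the first checkpoint: start from 0
    src, tgt = checkpoints[idx]
    # Offsets advance in lockstep between checkpoints
    return tgt + (src_offset - src)

# Example: the target cluster skipped some records, so offsets diverge.
checkpoints = [(0, 0), (100, 90), (200, 185)]
```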

Amazon MSK Replicator now supports log forwarding for replication visibility Amazon MSK Replicator now delivers replicator logs to give you end-to-end visibility into replication health. Replicator logs surface critical replication events and errors along with guidance on how to resolve each issue, enabling you to troubleshoot faster without requiring AWS Support. MSK Replicator is a feature of Amazon MSK that automates data replication between Kafka clusters, eliminating the need to manage custom replication infrastructure or configure open-source tools. Until now, you could use Amazon CloudWatch metrics to track replication progress and get visibility into replication health. With this launch, MSK Replicator further simplifies diagnosing issues during replication with actionable log entries that surface the most common replication errors, including insufficient permissions on source topics, partition quota exhaustion on target clusters, and records exceeding size limits, along with prescriptive guidance on how to resolve each issue. MSK Replicator also logs steady-state replication activity including offset commits, topic discovery events, and any errors or warnings from Kafka clients used internally by the replicator, giving you end-to-end visibility into replication health. You can enable log delivery when creating or updating a Replicator using the Amazon MSK console, AWS CLI, or AWS CloudFormation and forward logs to Amazon CloudWatch, Amazon S3, or Amazon Data Firehose. This capability is supported in all AWS Regions where MSK Replicator is available. Log delivery costs depend on the destination service you choose; refer to the pricing pages for https://aws.amazon.com/cloudwatch/pricing/, https://aws.amazon.com/s3/pricing/, and https://aws.amazon.com/firehose/pricing/. To learn more, visit the MSK Replicator documentation (https://docs.aws.amazon.com/msk/latest/developerguide/msk-replicator.html) and feature page (https://aws.amazon.com/msk/features/msk-replicator/).

AWS IoT Greengrass v2.17 now supports non-root installation and introduces new lightweight components AWS IoT Greengrass v2.17 (https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-release-2026-04-16.html) is now available, enabling you to run the edge runtime as a non-root user on Linux systems and deploy lighter-weight components that use significantly less memory. AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software at the edge. With this release, you can install and run AWS IoT Greengrass v2.17 as a non-root user, making it easier to meet security requirements in enterprise and regulated environments where root access is prohibited. The release also adds an uninstall life cycle capability that automatically activates when you remove a component from a device, simplifying dependency management. Moreover, the release introduces the following new nucleus lite capabilities to reduce resource consumption at the edge: a Secure Tunneling lite component that uses just 4MB of memory, down from 36MB in the standard component; an updated Fleet Provisioning component that supports Trusted Platform Module (TPM) 2.0 for cryptographic operations and secure device identity management; and a PKCS#11 (Public Key Cryptography Standard) interface that enables the AWS IoT Greengrass nucleus lite component to authenticate with AWS IoT Core using keys and certificates stored in a Hardware Security Module (HSM). AWS IoT Greengrass v2.17 is available in all AWS Regions where AWS IoT Greengrass is offered. To learn more about AWS IoT Greengrass v2.17 and its new features, visit the AWS IoT Greengrass documentation at https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html. Follow the Getting Started guide (https://docs.aws.amazon.com/greengrass/v2/developerguide/getting-started.html) for a quick introduction to AWS IoT Greengrass.

Amazon EVS now offers Microsoft Windows Server Licensing Today, we're announcing that Amazon Elastic VMware Service (Amazon EVS) now offers Microsoft Windows Server licensing entitlements. You can now migrate or create new virtual machines (VMs) running Windows Server OS in EVS and obtain Windows Server licensing entitlements for those VMs from AWS. Amazon EVS lets you run VMware Cloud Foundation (VCF) directly within your Amazon Virtual Private Cloud (VPC) on EC2 bare-metal instances, powered by AWS Nitro. Using either our step-by-step configuration workflow or the AWS Command Line Interface (CLI), you can set up a complete VCF environment in just a few hours. This rapid deployment enables faster workload migration to AWS, helping you eliminate aging infrastructure, reduce operational risks, and meet critical timelines for exiting your data center. With this latest functionality, you can now entitle your Windows Server VMs on Amazon EVS with Microsoft Windows Server. You can configure an EVS connector to your VMware vCenter Server and provide the VM IDs for those Windows Server VMs you want to entitle through the Amazon EVS console or AWS CLI. Pay for only what your VMs use, on a per vCPU-hour basis. Add or remove entitlement for your VMs at any time, giving you flexibility to manage costs as your environment evolves. This newest release provides you with greater flexibility when migrating to AWS, helping meet critical data center exit timelines while maintaining your familiar VMware environment. This feature is available in all AWS Regions where Amazon EVS is available (https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). For more details, read the step-by-step walkthrough on the AWS blog (https://aws.amazon.com/blogs/migration-and-modernization/whats-new-amazon-evs-now-offers-windows-server-licensing-for-your-vmware-migrations/). Visit the Amazon EVS product page (https://aws.amazon.com/evs/) and user guide (https://docs.aws.amazon.com/evs/latest/userguide/what-is-evs.html) to learn more about Amazon EVS.
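The per vCPU-hour metering model is easy to reason about: entitlement usage is the sum of vCPUs times hours entitled across your Windows Server VMs. The helper below only computes vCPU-hours; the VM sizes are hypothetical, and the published licensing rate (not shown here) would be applied on top.

```python
# Sum vCPU-hours of Windows Server entitlement across a fleet of VMs.
# Multiply the result by AWS's published per vCPU-hour rate for cost.
def entitlement_vcpu_hours(vms: list) -> int:
    return sum(vm["vcpus"] * vm["hours_entitled"] for vm in vms)

# Hypothetical fleet: one VM entitled all month, one for half the month
vms = [
    {"vcpus": 4, "hours_entitled": 720},
    {"vcpus": 8, "hours_entitled": 360},
]
total = entitlement_vcpu_hours(vms)  # 4*720 + 8*360
```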

Amazon EBS expands volume modification enhancement to AWS European Sovereign Cloud Region Amazon Elastic Block Store (Amazon EBS) now supports up to four Elastic Volumes modifications per volume within a rolling 24-hour window in AWS European Sovereign Cloud (Germany) Region. Elastic Volumes modifications allow you to increase the size, change the type, and adjust the performance of your EBS volumes. With this update, you can start a new modification immediately after the previous one completes, as long as you have initiated fewer than four modifications in the past 24 hours. This enhancement improves your operational agility to immediately scale storage capacity or adjust performance in response to sudden data growth or unanticipated workload spikes. With Elastic Volumes modifications, you can modify your volumes without detaching them or restarting your instances, allowing your application to continue running with minimal performance impact. The Elastic Volumes modifications enhancement is automatically available in the Region without requiring changes to your existing workflows. To learn more, see https://docs.aws.amazon.com/ebs/latest/userguide/ebs-modify-volume.html in the Amazon EBS User Guide.
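The rolling-window rule above is straightforward to model: a new Elastic Volumes modification is allowed when the previous one has completed and fewer than four were initiated in the trailing 24 hours. A minimal sketch:

```python
from datetime import datetime, timedelta

# Check whether a new volume modification may start, given the start times
# of previous modifications on the same volume.
def can_modify(past_starts: list, now: datetime,
               previous_complete: bool = True) -> bool:
    window_start = now - timedelta(hours=24)
    recent = [t for t in past_starts if t > window_start]
    return previous_complete and len(recent) < 4

now = datetime(2026, 4, 17, 12, 0)
# One modification falls outside the 24-hour window, three inside it
starts = [now - timedelta(hours=h) for h in (30, 20, 10, 2)]
```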

AWS Managed Microsoft AD is now available on Windows functional level 2016 Starting today, all AWS Directory Service for Microsoft AD (AWS Managed Microsoft AD) directories run on Windows functional level 2016. The upgrade to Windows functional level 2016 has been applied automatically to all existing AWS Managed Microsoft AD directories. The functional level upgrade includes enhanced authentication mechanisms and improved security for privileged access management, helping you better protect your Active Directory infrastructure in the cloud. This upgrade provides LAPS (Local Administrator Password Solution), which helps you manage local administrator passwords on domain-joined computers by automatically generating unique, complex passwords and storing them securely in Active Directory. This is enabled in all AWS Regions (https://docs.aws.amazon.com/directoryservice/latest/admin-guide/regions.html) where AWS Managed Microsoft AD is available, except in the Middle East (UAE) and Middle East (Bahrain) Regions. To learn more, see the AWS Managed Microsoft AD key concepts documentation at https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_key_concepts.html.

Amazon EKS enhances cluster governance with new IAM condition keys Amazon Elastic Kubernetes Service (EKS, https://aws.amazon.com/eks/) now supports seven additional IAM condition keys (https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-policy-keys) for cluster creation and configuration APIs, enhancing the governance controls available through IAM policies and Service Control Policies (SCPs). Organizations managing multi-account environments require centralized mechanisms to enforce security and compliance requirements consistently across all clusters without relying on manual processes or post-deployment checks. This expansion of EKS IAM condition keys further enables proactive policy enforcement, providing organizations with more granular control to establish guardrails for cluster configurations. Organizations can now enforce private-only API endpoints (eks:endpointPublicAccess, eks:endpointPrivateAccess), require customer-managed AWS KMS keys for secrets encryption (eks:encryptionConfigProviderKeyArns), restrict clusters to approved Kubernetes versions (eks:kubernetesVersion), mandate deletion protection for production workloads (eks:deletionProtection), specify control plane scaling tiers (eks:controlPlaneScalingTier), and enable zonal shift capabilities for high availability (eks:zonalShiftEnabled). These condition keys apply to the CreateCluster, UpdateClusterConfig, UpdateClusterVersion, and AssociateEncryptionConfig APIs, integrating seamlessly with AWS Organizations SCPs for centralized governance across accounts. The new IAM condition keys are available in all AWS Regions where Amazon EKS is available at no additional charge.
To learn more about Amazon EKS IAM condition keys, see the Amazon EKS documentation (https://docs.aws.amazon.com/eks/latest/userguide/security-iam-service-with-iam.html#security-iam-service-with-iam-id-based-policies) and the service authorization reference for Amazon EKS (https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html). For information about implementing Service Control Policies, see the AWS Organizations documentation at https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html.
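As an illustration of how these keys compose into a guardrail, here is a sketch of an SCP that denies cluster creation or reconfiguration whenever the public API endpoint is enabled. The statement uses the `eks:endpointPublicAccess` key named above; the `Sid` and exact scoping are assumptions to adapt to your organization.

```python
import json

# Sketch of a Service Control Policy enforcing private-only EKS endpoints.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequirePrivateEksEndpoints",  # hypothetical statement ID
        "Effect": "Deny",
        "Action": ["eks:CreateCluster", "eks:UpdateClusterConfig"],
        "Resource": "*",
        "Condition": {
            # Deny any request that enables the public endpoint
            "Bool": {"eks:endpointPublicAccess": "true"},
        },
    }],
}
policy_doc = json.dumps(scp)  # ready to attach via AWS Organizations
```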

Amazon DocumentDB (with MongoDB compatibility) now supports in-place upgrade from version 5.0 to 8.0 Amazon DocumentDB (https://aws.amazon.com/documentdb/) now supports in-place major version upgrade (MVU) from version 5.0 to 8.0. You can upgrade with just a few clicks in the AWS Management Console or via the AWS SDK or AWS CLI — no new clusters, no endpoint changes, and no index rebuilds required. Upgrading to version 8.0 delivers performance and cost improvements: query latency improves by up to 7x and storage compression improves by up to 5x, so your applications run faster on less storage, reducing your costs. Version 8.0 also adds new capabilities including collation, views, new aggregation stages and operators, enhanced text search with text index v2, and vector index builds that are up to 30x faster. In-place MVU from version 5.0 to 8.0 is available in all AWS Regions where Amazon DocumentDB 8.0 is available, at no additional cost. To get started, see the MVU guide at https://docs.aws.amazon.com/documentdb/latest/developerguide/docdb-mvu.html. To learn more about Amazon DocumentDB 8.0, visit the release notes at https://docs.aws.amazon.com/documentdb/latest/developerguide/compatibility.html#compatibility-whatsnew-8.
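A minimal sketch of kicking off the upgrade with the SDK. The parameter names mirror the RDS-style ModifyDBCluster API that DocumentDB uses; the target engine version string "8.0.0" is an assumption, so check the MVU guide for the exact value before running.

```python
# Build the request for an in-place major version upgrade of a
# DocumentDB cluster. The cluster identifier is a hypothetical example.
def build_mvu_request(cluster_id: str) -> dict:
    return {
        "DBClusterIdentifier": cluster_id,
        "EngineVersion": "8.0.0",          # assumed target version string
        "AllowMajorVersionUpgrade": True,  # required for an MVU
        "ApplyImmediately": True,          # or defer to the maintenance window
    }

req = build_mvu_request("orders-cluster")
# A real upgrade would pass this to boto3:
# boto3.client("docdb").modify_db_cluster(**req)
```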
