Amazon SageMaker Studio launches support for Kiro and Cursor IDEs as remote IDEs

Today, AWS announces the ability to remotely connect from the Kiro and Cursor IDEs to Amazon SageMaker Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro or Cursor setup - including spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker Studio. By connecting Kiro or Cursor to SageMaker Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Studio offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab, Code Editor based on Code-OSS (Open-Source Software), and VS Code as a remote IDE. Starting today, you can also use your customized local Kiro or Cursor setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. You can authenticate using the AWS Toolkit extension in Kiro or Cursor, or through SageMaker Studio's web interface. Once authenticated, connect to any of your SageMaker Studio development environments in a few clicks. You maintain the same security boundaries as SageMaker Studio's web-based environments while developing AI models and analyzing data in the local IDE of your choice - Kiro or Cursor.

To learn more, refer to the SageMaker user guide: https://docs.aws.amazon.com/sagemaker/latest/dg/remote-access.html

🆕 AWS now lets you connect Kiro and Cursor IDEs to Amazon SageMaker Studio for remote development, enabling seamless access to SageMaker's scalable resources while maintaining your local setup's spec-driven features, eliminating context switching.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio launches support for remote connection from Cursor IDE

Today, AWS announces remote connection from Cursor IDE to Amazon SageMaker Unified Studio via the AWS Toolkit extension. This new capability allows data scientists, ML engineers, and developers to leverage their Cursor setup - including its AI-powered code completion, natural language editing, and multi-file editing capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Cursor to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing AI-assisted development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Cursor setup - complete with custom rules, extensions, and AI model preferences - while accessing your compute resources and data on Amazon SageMaker. Since Cursor is built on Code-OSS, authentication is secured via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.

This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available (https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/supported-regions.html). To learn more, visit the local IDE support documentation: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/local-ide-support.html

🆕 AWS unveils Cursor IDE for remote access to Amazon SageMaker Unified Studio via AWS Toolkit, streamlining AI workflows, cutting context switching, and ensuring enterprise security. Available globally with SageMaker Unified Studio.

#AWS #AmazonSagemaker

Amazon SageMaker AI now supports serverless reinforcement fine-tuning for 12 additional models

Amazon SageMaker AI now supports serverless model customization and reinforcement fine-tuning for 12 additional open-weight models, enabling you to fine-tune and evaluate them without provisioning or managing infrastructure. The newly supported models are: gpt-oss-120b, Qwen2.5 72B Instruct, DeepSeek-R1-Distill-Llama-70B, Qwen3 14B, DeepSeek-R1-Distill-Qwen-14B, Qwen2.5 14B Instruct, DeepSeek-R1-Distill-Llama-8B, DeepSeek-R1-Distill-Qwen-7B, Qwen3 4B, Meta Llama 3.2 3B Instruct, Qwen3 1.7B, and DeepSeek-R1-Distill-Qwen-1.5B. With this expansion, you can customize these models using supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement fine-tuning (RFT) techniques including RLVR and RLAIF, and pay only for what you use.

Reinforcement fine-tuning enables you to align models to complex, domain-specific reasoning tasks where techniques such as traditional SFT alone fall short. With RLVR, you can improve model accuracy on verifiable tasks such as code generation, math, and structured extraction by providing reward signals based on correctness. RLAIF uses AI-generated feedback to steer model behavior toward your quality and safety preferences. These techniques are available on previously supported and newly added models, with no cluster setup, capacity planning, or distributed training expertise required.

These models and fine-tuning techniques are available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and EU (Ireland). To get started, see the Amazon SageMaker AI model customization product page (https://aws.amazon.com/sagemaker/ai/model-customization/) and the Amazon SageMaker AI pricing page (https://aws.amazon.com/sagemaker/ai/pricing/, Model Customization tab) for the full list of models, techniques, and prices.
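The RLVR idea described above - reward signals derived from verifiable correctness - can be illustrated with a toy grader. This is a generic sketch, not a SageMaker API; the "Answer: <value>" output format and the exact-match grading rule are assumptions for illustration:

```python
import re

def rlvr_reward(model_output: str, expected_answer: str) -> float:
    """Toy verifiable reward: 1.0 if the completion's final answer matches
    the ground truth, else 0.0. In RFT, scores like this over sampled
    completions become the reinforcement signal."""
    # Assume the model is prompted to end its response with "Answer: <value>".
    match = re.search(r"Answer:\s*(.+)", model_output)
    if not match:
        return 0.0  # unparseable output earns no reward
    return 1.0 if match.group(1).strip() == expected_answer.strip() else 0.0

print(rlvr_reward("Let x=4, so 3x+2=14. Answer: 14", "14"))  # 1.0
print(rlvr_reward("Answer: 15", "14"))                       # 0.0
```

Real graders for code generation or structured extraction would replace the string comparison with test execution or schema validation, but the shape of the signal - correctness mapped to a scalar reward - is the same.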

🆕 Amazon SageMaker AI now supports serverless reinforcement fine-tuning for 12 new models, allowing customization without infrastructure. Pay only for usage. Available in US, Asia Pacific, and EU regions. See Amazon SageMaker AI for details.

#AWS #AmazonSagemaker #AmazonMachineLearning

AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026) Last week, my team met many developers at Developer Week in San Jose. My colleague Vinicius Senger delivered a great keynote about renascent software - a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues, Du'An Lightfoot, Elizabeth Fuentes, Laura Salinas, and Sandhya Subramani, spoke about building and […]


#AWS #AmazonAurora #AmazonBedrock #AmazonEc2 #AmazonNova #AmazonSagemaker #Launch #News #WeekInReview


I've just finished the "Amazon SageMaker" course on Udemy #training #formacion #AWS #AmazonSageMaker #MachineLearning #ML

SageMaker Training Plans now supports extending existing capacity commitments without workload reconfiguration

SageMaker Training Plans (https://docs.aws.amazon.com/sagemaker/latest/dg/reserve-capacity-with-training-plans.html) allows you to reserve GPU capacity within specified time frames in cluster sizes of up to 64 instances. Today, Amazon SageMaker AI announces that Training Plans can now be extended when your AI workloads take longer than anticipated, ensuring uninterrupted access to capacity. You can extend plans in 1-day increments up to 14 days, or 7-day increments up to 182 days (26 weeks). Extensions can be initiated via the API or the SageMaker console. Once the extension is purchased, the workload continues to run uninterrupted without you needing to reconfigure it.

SageMaker AI helps you create the most cost-efficient training plan that fits within your timeline and AI budget. Once you create and purchase your training plan, SageMaker automatically provisions the infrastructure and runs the AI workloads on these compute resources without requiring any manual intervention.

See the SageMaker AI pricing page (https://aws.amazon.com/sagemaker/ai/pricing/) for a detailed breakdown of instance availability by AWS Region. To learn more about training plan extensions, see the Amazon SageMaker Training Plans User Guide: https://docs.aws.amazon.com/sagemaker/latest/dg/reserve-capacity-with-training-plans.html
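The extension increments above (1-day steps up to 14 days, then 7-day steps up to 182 days) can be encoded as a small validity check. This is a generic illustration based on reading the announcement, not part of any SageMaker SDK, and the exact boundary behavior is an assumption:

```python
def is_valid_extension(days: int) -> bool:
    """Check a requested extension length against the announced increments:
    1-day steps up to 14 days, or 7-day steps up to 182 days (26 weeks).
    Assumed interpretation of the announcement, not an official rule table."""
    if 1 <= days <= 14:
        return True           # any whole number of days in the short range
    if 14 < days <= 182:
        return days % 7 == 0  # longer extensions only in whole weeks
    return False

print([d for d in (1, 14, 15, 21, 182, 183) if is_valid_extension(d)])
# [1, 14, 21, 182]
```

A request for 15 days would be rejected under this reading (it is past the daily range and not a multiple of a week), while 21 days (3 weeks) or the 182-day maximum would be accepted.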

🆕 Amazon SageMaker Training Plans now allows extending GPU capacity reservations by 1-7 days up to 182 days without reconfiguring workloads. Extensions can be initiated via API or console, ensuring uninterrupted AI training.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio now supports faster data preview in Visual ETL

Amazon SageMaker Unified Studio introduces data preview v2.0 for Visual ETL, a new data preview mode that delivers near-instant results when building and iterating on visual ETL jobs. With data preview v2.0, data engineers and analysts can see the output of each transform in about one second, with no session startup required and at no additional compute cost.

Data preview v2.0 uses an in-browser query engine to load and process data locally, removing the dependency on server-side Spark sessions for preview operations. Source data is fetched once and cached in the browser, so subsequent transforms apply instantly without re-querying the underlying data source. For Amazon Redshift users, this means you can iterate on transforms without additional queries against your Redshift cluster, keeping your preview workflow fast and your cluster resources focused on production workloads.

Data preview v2.0 supports CSV, Parquet, and JSON files from Amazon S3, in addition to data from Amazon Redshift, Amazon S3 Tables, AWS Glue Data Catalog, and third-party sources including Snowflake, MySQL, PostgreSQL, SQL Server, Oracle, Google BigQuery, Amazon DynamoDB, and Amazon DocumentDB. A toggle in the Visual ETL editor lets you switch between data preview v2.0 and the original Spark-based preview at any time.

Data preview v2.0 in Visual ETL is available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To learn more, visit the Amazon SageMaker Unified Studio documentation: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/visual-etl-data-previews.html
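The fetch-once-then-transform-locally pattern described above can be sketched with a minimal cache. This is plain Python standing in for the in-browser engine, purely conceptual; the class and names are illustrative, not SageMaker internals:

```python
class PreviewCache:
    """Fetch source data once, then apply transforms to the cached copy,
    so iterating on a pipeline never re-queries the source."""

    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn
        self._rows = None
        self.fetch_count = 0  # how often the source was actually queried

    def rows(self):
        if self._rows is None:            # first preview: hit the source
            self._rows = self._fetch_fn()
            self.fetch_count += 1
        return self._rows                 # later previews: serve from cache

    def preview(self, *transforms):
        data = self.rows()
        for t in transforms:              # each transform runs locally
            data = [t(row) for row in data]
        return data

# Simulated source fetch (stands in for e.g. a Redshift query).
cache = PreviewCache(lambda: [{"price": 10}, {"price": 25}])
cache.preview(lambda r: {**r, "discounted": r["price"] * 0.9})
cache.preview(lambda r: {**r, "tax": r["price"] * 0.1})
print(cache.fetch_count)  # 1 -- the source was queried only once
```

Both previews return in-memory results, and `fetch_count` stays at 1: that is the property the announcement highlights for Redshift users, where repeated previews add no load to the cluster.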

🆕 Amazon SageMaker Unified Studio's v2.0 data preview speeds up Visual ETL, delivering near-instant transform results in one second. It uses an in-browser query engine, caches data, and supports CSV, Parquet, JSON, and multiple sources. Available in all AWS Regions.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio adds light mode support for IAM-based domains

Today, AWS announces light mode support in Amazon SageMaker Unified Studio for IAM-based domains. Customers can now configure the visual interface mode to match their preference, choosing between dark and light themes. Light mode helps improve readability in bright environments and provides a familiar visual experience for customers who prefer lighter interfaces. Combined with the existing dark mode, this update gives you full control over your development environment's appearance, improving accessibility and reducing eye strain across varying lighting conditions.

In SageMaker Unified Studio, you can click 'Customize appearance' under your profile settings to choose between visual modes, including dark and light. The setting persists across browsers and devices.

This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the User Guide: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/navigating-sagemaker-unified-studio.html#display-mode

🆕 AWS adds light mode support in Amazon SageMaker Unified Studio for IAM-based domains, letting users choose between dark and light themes for improved readability and accessibility. Available in all regions, settings persist across browsers and devices.

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio adds metadata sync with third-party catalogs

Amazon SageMaker Unified Studio now supports metadata and context sync across Atlan, Collibra, and Alation. These integrations synchronize catalog metadata between Amazon SageMaker Catalog and each partner platform, giving teams a consistent view of their data and AI assets regardless of which tool they use day to day. Organizations can maintain aligned glossary terms, asset descriptions, and ownership information across platforms without manual reconciliation.

All three integrations synchronize key metadata elements including projects, assets, descriptions, glossary terms, and their hierarchies. With the Collibra integration, you can synchronize metadata in both directions between SageMaker Catalog and the partner platform, so updates you make in one are reflected in the other; you can also manage SageMaker Unified Studio data access requests from Collibra. With the Atlan and Alation integrations, you can ingest metadata from SageMaker Catalog into Alation, with additional enhancements coming soon. You set up the Atlan and Alation integrations by creating a connection to SageMaker Unified Studio from within those platforms, while the Collibra integration is available as an open-source solution on GitHub.

To learn more, visit the Amazon SageMaker Unified Studio documentation (https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/third-party-catalog-integrations.html). For implementation details, see the Atlan blog post (https://aws.amazon.com/blogs/big-data/unifying-governance-and-metadata-across-amazon-sagemaker-unified-studio-and-atlan/), Collibra blog post (https://aws.amazon.com/blogs/big-data/unifying-metadata-governance-across-amazon-sagemaker-and-collibra/), and Alation blog post (https://aws.amazon.com/blogs/big-data/build-a-trusted-foundation-for-data-and-ai-using-alation-and-amazon-sagemaker-unified-studio/).

🆕 Amazon SageMaker Unified Studio syncs metadata with Atlan, Collibra, and Alation for consistent data and AI asset views. Key elements like projects and glossary terms sync, with Collibra offering bidirectional sync and data access requests. Integrate via Atlan, Alation, or…

#AWS #AmazonSagemaker

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs

Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for Visual ETL, notebook, and code-based data processing jobs. With AWS Glue 5.1 in Amazon SageMaker Unified Studio, data engineers and data scientists can run jobs on Apache Spark 3.5.6 with Python 3.11 and Scala 2.12.18, and use updated open table format libraries including Apache Iceberg 1.10.0, Apache Hudi 1.0.2, and Delta Lake 3.3.2.

You can use AWS Glue 5.1 in Amazon SageMaker Unified Studio when creating data processing jobs by selecting Glue 5.1 from the version dropdown in job settings. This applies to Visual ETL jobs, notebook jobs, and code-based jobs, so you can take advantage of the latest Spark runtime and open table format libraries across all your data processing workflows.

AWS Glue 5.1 in Amazon SageMaker Unified Studio is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Stockholm), Europe (Frankfurt), Europe (Spain), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Malaysia), Asia Pacific (Thailand), Asia Pacific (Mumbai), and South America (Sao Paulo).

To learn more, visit the Amazon SageMaker Unified Studio documentation (https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/smus-creating-jobs.html). For details on what's included in AWS Glue 5.1, including updated open table format support and access control capabilities, see the AWS Glue documentation (https://docs.aws.amazon.com/glue/latest/dg/release-notes.html).

🆕 Amazon SageMaker Unified Studio now supports AWS Glue 5.1 for data processing jobs, enabling Visual ETL, notebooks, and code-based jobs with Spark 3.5.6 and updated libraries like Apache Iceberg, Hudi, and Delta Lake. Available in multiple regions.

#AWS #AmazonSagemaker #AwsGlue

Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE

Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup - including its spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS Toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services.

SageMaker Unified Studio, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDEs), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. Since Kiro is built on Code-OSS, authentication is secured via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration.

This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the SageMaker user guide: https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/local-ide-support.html

🆕 AWS now connects Kiro IDE to Amazon SageMaker Unified Studio, letting data scientists use Kiro's tools with SageMaker's compute, all in one place for smooth analytics and AI/ML workflows.

#AWS #AmazonMachineLearning #AmazonSagemaker

0 0 0 0
Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, and developers to leverage their Kiro setup - including its spec-driven development, conversational coding, and automated feature generation capabilities - while accessing the scalable compute resources of Amazon SageMaker. By connecting Kiro to SageMaker Unified Studio using the AWS toolkit extension, you can eliminate context switching between your local IDE and cloud infrastructure, maintaining your existing agentic development workflows within a single environment for all your AWS analytics and AI/ML services. https://aws.amazon.com/sagemaker/unified-studio/, part of the next generation of Amazon SageMaker, offers a broad set of fully managed cloud interactive development environments (IDE), including JupyterLab and Code Editor based on Code-OSS (Open-Source Software). Starting today, you can also use your customized local Kiro setup - complete with specs, steering files, and hooks - while accessing your compute resources and data on Amazon SageMaker. Since Kiro is built on Code-OSS, authentication is secure via IAM through the AWS Toolkit extension, giving you access to all your SageMaker Unified Studio domains and projects. This integration provides a convenient path from your local AI-powered development environment to scalable infrastructure for running workloads across data processing, SQL analytics services like Amazon EMR, AWS Glue, and Amazon Athena, and ML workflows - all with enterprise-grade security including customer-managed encryption keys and AWS IAM integration. This feature is available in all Regions where Amazon SageMaker Unified Studio is available. To learn more, refer to the https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/local-ide-support.html

Amazon SageMaker Unified Studio launches support for remote connection from Kiro IDE

Today, AWS announces the ability to remotely connect from Kiro IDE to Amazon SageMaker Unified Studio. This new capability allows data scientists, ML engineers, an...

#AWS #AmazonMachineLearning #AmazonSagemaker

0 0 0 0
Preview
Amazon SageMaker HyperPod now supports API-driven Slurm configuration Amazon SageMaker HyperPod now supports API-driven Slurm configuration, enabling you to define Slurm topology and shared filesystem configurations directly in the cluster create and update APIs or through the AWS Console. SageMaker HyperPod helps you provision resilient clusters for running machine learning (ML) workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). With this new API-driven configuration, you can now specify Slurm node types including Controller, Login, and Compute for cluster instance groups; instance group to partition mappings; and FSx for Lustre and FSx for OpenZFS filesystem mounts per instance group directly in the cluster API definition or through the advanced configuration section in the AWS Console. When you modify partition-node mappings directly in Slurm's native configuration files to fine-tune cluster resource assignments, Slurm's partition-node configurations can drift from HyperPod's view. A new cluster-level SlurmConfigStrategy helps you manage drift with three options: Managed, Overwrite, and Merge. The Managed strategy allows you to manage instance group to partition mappings completely via the API or Console, and automatically detects drift in partition-to-node mappings during scale-up or scale-down operations. When drift is detected, cluster updates are paused until you resolve it by switching to the Overwrite strategy to force API-defined mappings, the Merge strategy to preserve manual customizations, or by directly updating Slurm configurations to align with HyperPod. API-driven Slurm configuration is available in all AWS Regions where SageMaker HyperPod is available. To get started, you can use the AWS Management Console, AWS CLI, AWS CloudFormation, or AWS SDKs. 
For more information, see the Amazon SageMaker HyperPod documentation for creating clusters using the Console or the CLI, and the API reference for CreateCluster and UpdateCluster.

🆕 Amazon SageMaker HyperPod now supports API-driven Slurm setup, enabling direct cluster topology and shared filesystem configuration via cluster create/update APIs or AWS Console, managing Slurm partition-node mappings and drift. Available globally.

#AWS #AmazonSagemaker

0 0 0 0
Amazon SageMaker HyperPod now supports API-driven Slurm configuration Amazon SageMaker HyperPod now supports API-driven Slurm configuration, enabling you to define Slurm topology and shared filesystem configurations directly in the cluster create and update APIs or through the AWS Console. SageMaker HyperPod helps you provision resilient clusters for running machine learning (ML) workloads and developing state-of-the-art models such as large language models (LLMs), diffusion models, and foundation models (FMs). With this new API-driven configuration, you can now specify Slurm node types including Controller, Login, and Compute for cluster instance groups; instance group to partition mappings; and FSx for Lustre and FSx for OpenZFS filesystem mounts per instance group directly in the cluster API definition or through the advanced configuration section in the AWS Console. When you modify partition-node mappings directly in Slurm's native configuration files to fine-tune cluster resource assignments, Slurm's partition-node configurations can drift from HyperPod's view. A new cluster-level SlurmConfigStrategy helps you manage drift with three options: Managed, Overwrite, and Merge. The Managed strategy allows you to manage instance group to partition mappings completely via the API or Console, and automatically detects drift in partition-to-node mappings during scale-up or scale-down operations. When drift is detected, cluster updates are paused until you resolve it by switching to the Overwrite strategy to force API-defined mappings, the Merge strategy to preserve manual customizations, or by directly updating Slurm configurations to align with HyperPod. API-driven Slurm configuration is available in all AWS Regions where SageMaker HyperPod is available. To get started, you can use the AWS Management Console, AWS CLI, AWS CloudFormation, or AWS SDKs. 
For more information, see the Amazon SageMaker HyperPod documentation for creating clusters using the https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-hyperpod-quickstart.html or the https://docs.aws.amazon.com/sagemaker/latest/dg/smcluster-getting-started-slurm-cli.html, and the API reference for https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateCluster.html and https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateCluster.html.

Amazon SageMaker HyperPod now supports API-driven Slurm configuration

Amazon SageMaker HyperPod now supports API-driven Slurm configuration, enabling you to define Slurm topology and shared filesystem configurations directly in the cluster create and update APIs or throu...

#AWS #AmazonSagemaker

0 0 0 0
AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026) Last week, my team met many developers at Developer Week in San Jose. My colleague, Vinicius Senger delivered a great keynote about renascent software—a new way of building and evolving applications where humans and AI collaborate as co-developers using Kiro. Other colleagues spoke about building and deploying production-ready AI agents. Everyone stayed to ask and […]

AWS Weekly Roundup: Claude Sonnet 4.6 in Amazon Bedrock, Kiro in GovCloud Regions, new Agent Plugins, and more (February 23, 2026)

Last week, my team met many developers at Developer Week in ...

#AWS #AmazonAurora #AmazonBedrock #AmazonEc2 #AmazonNova #AmazonSagemaker #Launch #News #WeekInReview

0 0 0 0

🚀 AWS launches Amazon SageMaker Inference for custom Nova models
• Supports Nova Micro, Lite & 2 Lite with reasoning
• Deploy on EC2 G5, G6, P5 with auto-scaling
#AWS #AmazonSageMaker
aws.amazon.com/blogs/aws/announcing-ama...

0 0 0 0

🚀 AWS launches Amazon SageMaker custom Nova models
• Deploy Nova 2 Lite models with auto-scaling
• Configure instance types, concurrency, and security
#AWS #AmazonSageMaker
aws.amazon.com/blogs/aws/announcing-ama...

0 0 0 0
Preview
AWS Launches SageMaker Inference for Custom Nova Models AWS has launched SageMaker Inference for custom Nova models, completing a full fine-tuning-to-deployment pipeline for Nova Micro, Nova Lite, and Nova 2 Lite.

winbuzzer.com/2026/02/17/a...

AWS Launches SageMaker Inference for Custom Nova Models

#AI #Amazon #AmazonWebServicesAWS ##CloudComputing #EnterpriseAI #AgenticAI #FoundationModels #AIInference #NovaMicro #NovaLite #AmazonNova #AmazonSagemaker #Nova2Lite

0 0 0 0
Cartesia Sonic 3 text-to-speech model is now available on Amazon SageMaker JumpStart Cartesia’s Sonic 3 model is now available in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. Sonic 3 is Cartesia's latest state space model (SSM) for streaming text-to-speech (TTS), delivering high naturalness, accurate transcript following, and industry-leading latency with fine-grained control over volume, speed, and emotion. Sonic 3 supports 42 languages and provides advanced controllability through API parameters and SSML tags for volume, speed, and emotion adjustments. The model includes natural laughter support, stable voices optimized for voice agents, and emotive voices for expressive characters. With sub-100ms latency, Sonic 3 enables real-time conversational AI that captures human speech nuances including emotions and tonal shifts. With SageMaker JumpStart, customers can deploy Sonic 3 with just a few clicks to address their voice AI use cases. To get started with this model, navigate to the SageMaker JumpStart model catalog in the SageMaker Studio or use the SageMaker Python SDK to deploy the model to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html.

Cartesia Sonic 3 text-to-speech model is now available on Amazon SageMaker JumpStart

Cartesia’s Sonic 3 model is now available in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. Sonic ...

#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

0 1 0 0
Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIMs models purpose-built for biosciences and physical AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason. NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure. These models bring advanced capabilities spanning protein design, reasoning with configurable outputs, and physical world understanding, enabling customers to accelerate biosciences research, drug discovery, and embodied AI applications on AWS infrastructure. ProteinMPNN enables fast and efficient protein sequence optimization guided by structural data. This NIM generates high-quality sequences with enhanced binding affinity and stability, validated through experimental results. Designed for scalability and flexibility, ProteinMPNN integrates seamlessly into protein engineering workflows, transforming applications like enzyme design and therapeutic development. MSA Search NIM supports GPU-accelerated Multiple Sequence Alignment (MSA) of a query amino acid sequence against a set of protein sequence databases. These databases are searched for similar sequences to the query and then the collection of sequences are aligned to establish similar regions even when the proteins have different lengths and motifs. Nemotron-3.5B-Instruct delivers high reasoning performance, native tool calling support, and extended context processing with 256k token context window. This model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to ensure higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. 
It is ideal for building multi-agent workflows, developer productivity tools, processes automation, and for scientific and mathematical reasoning analysis, amongst others. Cosmos Reason is an open , customizable, reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. This model understands space, time, and fundamental physics, and can serve as a planning model to reason what steps an embodied agent might take next. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html.

Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart

Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIMs models purpose-built for biosciences and physical AI: Prot...

#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

0 0 0 0
Preview
Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIMs models purpose-built for biosciences and physical AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason. NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure. These models bring advanced capabilities spanning protein design, reasoning with configurable outputs, and physical world understanding, enabling customers to accelerate biosciences research, drug discovery, and embodied AI applications on AWS infrastructure. ProteinMPNN enables fast and efficient protein sequence optimization guided by structural data. This NIM generates high-quality sequences with enhanced binding affinity and stability, validated through experimental results. Designed for scalability and flexibility, ProteinMPNN integrates seamlessly into protein engineering workflows, transforming applications like enzyme design and therapeutic development. MSA Search NIM supports GPU-accelerated Multiple Sequence Alignment (MSA) of a query amino acid sequence against a set of protein sequence databases. These databases are searched for similar sequences to the query and then the collection of sequences are aligned to establish similar regions even when the proteins have different lengths and motifs. Nemotron-3.5B-Instruct delivers high reasoning performance, native tool calling support, and extended context processing with 256k token context window. This model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to ensure higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. 
It is ideal for building multi-agent workflows, developer productivity tools, processes automation, and for scientific and mathematical reasoning analysis, amongst others. Cosmos Reason is an open , customizable, reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. This model understands space, time, and fundamental physics, and can serve as a planning model to reason what steps an embodied agent might take next. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.

🆕 Amazon SageMaker JumpStart provides four NVIDIA NIMs models for biosciences and AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason, for quick AI deployment in drug discovery, protein design, and robotics on AWS with o…

#AWS #AmazonSagemakerJumpstart #Aiml #AmazonSagemaker

0 0 0 0
DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure. These models address different enterprise AI challenges with specialized capabilities: DeepSeek OCR explores visual-text compression for document processing. It can extract structured information from forms, invoices, diagrams, and complex documents with dense text layouts. MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning. It automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications. Qwen3-VL-8B-Instruct delivers ssuperior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities. With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the https://docs.aws.amazon.com/sagemaker/latest/dg/studio-jumpstart.html. 

DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart

Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portf...

#AWS #AmazonSagemaker #AmazonSagemakerJumpstart

0 0 0 0