Hitachi Invests $1B in U.S. Grid Infrastructure to Power AI Data Centers

Hitachi Energy, a subsidiary of Hitachi, has announced more than $1 billion in new U.S. manufacturing investments to expand the nation's electrical grid infrastructure and meet soaring energy demand from AI-driven data centers. The plan, revealed on September 4, includes $457 million dedicated to building a new power transformer factory in South Boston, Virginia - the largest such facility in the United States. The broader investment will expand existing Hitachi Energy operations nationwide and create thousands of jobs, bolstering domestic supply chains for critical grid equipment.

The announcement comes as the rapid growth of AI data centers intensifies strain on power networks. Large-scale AI training and inference require massive and stable energy inputs, which in turn depend on high-voltage transmission systems and large power transformers. By scaling up U.S. transformer production, Hitachi Energy aims to address a looming bottleneck that threatens to slow the buildout of data centers powering AI and cloud computing services.

Transformers are essential for managing the flow of electricity across high-voltage transmission lines, converting power for industrial applications, and ensuring stable energy delivery to hyperscale data centers. The new Virginia facility will add significant capacity to U.S. manufacturing of these components, which are currently in short supply due to rising global demand and constrained supply chains. More than 825 jobs are expected to be created at the site, ranging from advanced engineering and operations roles to skilled manufacturing positions. The South Boston site will operate alongside Hitachi's existing Virginia campus, anchoring what is projected to become the largest power transformer manufacturing operation in the country.
Company executives emphasized that the investment is a strategic response to the dual challenges of expanding AI infrastructure and strengthening U.S. energy resilience. "Power transformers are a linchpin technology for a robust and reliable electric grid," said Andreas Schierenbeck, CEO of Hitachi Energy. "As demand for AI and cloud capacity accelerates, ensuring domestic production of these critical systems is essential to strengthening supply chains and reducing bottlenecks."

Southside Virginia

Political leaders at both state and federal levels hailed the announcement as a transformative boost to U.S. energy independence and competitiveness in AI infrastructure. Virginia Governor Glenn Youngkin noted the direct economic impact on Southside Virginia, where more than 800 new jobs will be created, while Senators Mark Warner and Tim Kaine highlighted the facility's role in reinforcing American energy security. U.S. officials tied the investment directly to the expansion of AI capacity, framing stable and scalable grid infrastructure as a prerequisite for leadership in the global AI race.

Hitachi executives also underscored the company's global strategy. The Virginia project forms part of a $9 billion worldwide investment program aimed at expanding manufacturing capacity, R&D, and partnerships to deliver more resilient energy systems. By leveraging its expertise in operational technology, IT, and advanced electrification products, Hitachi Energy seeks to position itself as a central player in enabling the next generation of AI data centers.

Beyond its role in AI infrastructure, the investment will support broader grid modernization efforts in the U.S., ensuring more reliable energy delivery for manufacturing and other energy-intensive sectors. Sustainability will also play a role, with the new facilities designed to incorporate energy-efficient technologies and reduce environmental impact.
As AI adoption drives unprecedented growth in data processing and storage needs, the capacity to deliver secure, reliable, and scalable power will be critical. Hitachi Energy's billion-dollar investment in U.S. transformer manufacturing reflects both the urgency of this challenge and the strategic importance of aligning grid infrastructure with the rapid expansion of AI data centers. For policymakers, energy executives, and data center operators alike, the initiative marks a pivotal step in strengthening the foundations of America's digital economy.

#HostingJournalist #AI

ODATA Raises $1.02B Green Financing for Sustainable Data Centers

ODATA, a subsidiary of Aligned Data Centers, has secured US $1.02 billion in green financing dedicated to sustainable data center infrastructure projects across Latin America. The new capital brings ODATA's total financing to US $2.25 billion, marking the largest sustainable financing package ever issued for the region's data center sector. The funding strengthens the company's ability to meet the surging demand for cloud and AI infrastructure while maintaining strict commitments to environmental responsibility.

The financing round drew support from a group of international financial institutions, including Apterra, BNP Paribas, Crédit Agricole CIB, Deutsche Bank, MUFG Bank, Natixis Corporate and Investment Banking, Nomura, Société Générale, and SMBC. The funds will be directed to projects adhering to rigorous sustainability standards, emphasizing renewable energy use, energy efficiency, and eco-conscious construction practices.

Rafael Bomeny, CFO of ODATA, described the deal as a landmark achievement that underscores the company's vision for sustainable digital growth. "This green financing strengthens our financial structure and positions us to support our clients' digital infrastructure expansion throughout the region," he said. "By focusing on sustainability, we're not only advancing cutting-edge technologies but also fostering a more productive and environmentally responsible future for our communities."

Brazil, Mexico, Chile, Colombia

The company's growth strategy includes expanding its footprint in key Latin American markets such as Brazil, Mexico, Chile, and Colombia. These countries are seeing rising demand for robust IT infrastructure capable of supporting cloud services and artificial intelligence workloads. With its expanded financial resources, ODATA aims to address this demand while ensuring that its projects align with environmental goals.

Sustainability has long been central to ODATA's operating model. The company operates the first hyperscale data center in Latin America powered entirely by self-produced renewable energy in Brazil. It also incorporates innovative design approaches to maximize energy efficiency and minimize water usage. Beyond renewable integration, ODATA has introduced Delta³, a proprietary air-cooling technology developed by Aligned Data Centers. This system supports up to 50kW per rack and is designed to integrate with advanced liquid cooling systems, enabling data centers to handle high-density workloads more sustainably.

The announcement underscores the accelerating convergence of sustainability and digital infrastructure investment in Latin America. With demand for AI and cloud computing on the rise, ODATA's financing milestone not only strengthens its leadership in the market but also sets a new benchmark for environmentally aligned data center development in the region.

#HostingJournalist #DataCenter

Telehouse Thailand, NT Partner to Expand Submarine Cable Connectivity

Telehouse Thailand has announced a strategic partnership with National Telecom Public Company Limited (NT) to strengthen Thailand's international connectivity through submarine cable systems. The collaboration links NT's extensive cable network directly to Telehouse Bangkok, the country's flagship carrier-neutral interconnection facility, now fully operational.

The move allows Telehouse Thailand to provide domestic content providers and regional ISPs seamless access to two major international systems: the Asia Direct Cable (ADC) and the Asia America Gateway (AAG). The ADC route connects China, Hong Kong, Japan, the Philippines, Singapore, Thailand, and Vietnam, where many of the world's largest cloud and content companies host infrastructure. The AAG extends this connectivity from Asia across the Pacific to the United States, creating a bridge between some of the world's largest digital economies.

NT's domestic submarine cable system complements this setup by offering alternative routing through Thailand's Gulf coast to international cable landing stations in Songkhla and Satun. This design enhances resilience, ensuring reliable access across multiple regions for both public and private sector organizations.

Beyond enterprise benefits, the partnership supports Thailand's national strategy to position itself as an ASEAN Digital Hub. Enhanced connectivity is expected to accelerate digital transformation across Southeast Asia, attracting foreign investment while enabling businesses to capitalize on the growth of artificial intelligence and cloud services.

Telecommunications Readiness in the AI Era

Colonel Sanpachai Huvanandana, President of NT, emphasized the significance of the development: "This collaboration expands Thailand's business potential and telecommunications readiness in the AI era. With terabit-scale capacity and high-reliability design, our infrastructure addresses the demands of global cloud and content providers evaluating Thailand for data center investment."

Ken Miyashita, Managing Director of Telehouse Thailand, highlighted the operational advantages: "Leveraging NT's submarine cable network enables our customers to manage the surge in data from generative AI and cloud workloads. Combined with Telehouse Bangkok's four diverse fiber routes, this ensures high service availability for enterprises and service providers alike."

The partnership signals Thailand's determination to build robust digital infrastructure that can compete globally while serving as a gateway for the wider Southeast Asian region.

#HostingJournalist #Telecom

Greener Data: Actionable Insights from Industry Leaders

In a world increasingly driven by data, understanding the environmental impact of digital infrastructure has become crucial for sustainable progress. Greener Data offers valuable insights from industry leaders about how volume-driven data demands intersect with greener technologies and practices. By exploring these actionable strategies, readers gain a comprehensive understanding of current trends and effective ways to balance data growth with environmental stewardship. This knowledge can empower individuals and organizations to make informed decisions that contribute to a more sustainable digital future.

TL;DR

* Highly accessible insights suitable for a broad audience
* Concise yet thorough coverage of sustainability trends in data infrastructure
* In-depth perspectives from top industry experts on greener technology
* Affordable at $14.99 for actionable and relevant content
* Engaging mix of real-world examples and future-focused strategies

Greener Data

The book dives into a wealth of actionable ideas and real-world examples from industry leaders across the globe, covering everything from reducing carbon emissions to leveraging new hardware and software innovations. Whether someone is just starting to consider sustainability in their data practices or already working in digital infrastructure, this book breaks down complex topics into approachable insights. It's perfect for those curious about how industry players are tackling environmental challenges with data. On any given day, it provides fresh perspectives on combining people, technology, and resources to make data greener; on special occasions, it could spark great conversations about the future of sustainable tech.
Pros and Cons

Pros:
* ✅ Provides actionable insights from a diverse group of global leaders
* ✅ Covers a broad range of topics from emissions to investment
* ✅ Well structured with real-world examples and tools
* ✅ Helps connect data with sustainability in meaningful ways

Cons:
* ❌ Some sections may feel dense for readers without background knowledge
* ❌ A few technical terms might require additional research

What People Say

Readers consistently point out how the book balances technical detail with approachable language, making it useful whether someone is deeply involved in the industry or just starting to explore greener data practices.

nosaj: After browsing through the chapters, they appreciated the opportunities for efficient planning and future development. The book clearly outlines the best pathways to be good stewards of the environment while supporting the world's growing hunger for data and access.

PHIL: It's a quick yet solid read that discusses current sustainability trends in the digital infrastructure industry. It gave them a clearer understanding of where the industry is headed and practical ways to support a greener future.

Innovation Spotlight

The book showcases fresh approaches to combining data management with sustainability goals, focusing on hardware and software innovations, as well as inclusive resource management that together create a blueprint for greener digital infrastructure.

Key Benefits

* Offers practical strategies to reduce carbon footprint in data centers
* Features insights from 24 industry leaders worldwide
* Breaks down complex sustainability topics into easy-to-understand ideas
* Highlights innovation in both hardware and software to drive greener data
* Encourages inclusive approaches involving resource and people management

Current Price: $8.25 - $14.99
Rating: 4.7 (30+ reviews)

FAQ

What makes Greener Data: Actionable Insights from Industry Leaders a valuable resource for understanding sustainable data practices?
Greener Data: Actionable Insights from Industry Leaders provides a unique blend of expert insights and practical examples from top industry leaders, making it an essential read for anyone interested in sustainable data management. It balances technical understanding with accessible language, ensuring that readers grasp the key challenges and innovative solutions shaping greener digital infrastructures. The book highlights how the growing volume of global data demands new strategies for energy efficiency and environmental stewardship, making it a timely and actionable guide.

How should a potential buyer assess if this book aligns with their needs or interests?

Potential buyers should consider if they seek a clear and engaging overview of sustainable data infrastructure rather than a highly technical manual. The book is well-suited for professionals, decision-makers, and anyone curious about the ecological impact of data technologies. Since this is the first of two volumes, they may also want to plan for the follow-up to gain comprehensive coverage. At a price of $14.99 USD, it offers valuable insights without the commitment of expensive technical texts, making it accessible for both individuals and organizations aiming to implement greener data strategies.

What practical steps can readers take after reading to foster greener data practices in their organizations?

After reading, readers can start by evaluating their data centers' energy consumption and exploring renewable power options similar to those described in the book, such as nuclear-powered data centers and energy consumption labeling. They should also promote awareness about the environmental importance of data management among colleagues and leadership. Applying ideas from the book, such as planning for data volume growth with sustainability in mind and leveraging industry best practices, can lead to measurable reductions in carbon footprint while supporting scalable data access.
Wrapping Up

Greener Data provides a valuable roadmap for those interested in the intersection of data volume and environmental responsibility. Through the distilled wisdom of industry leaders, it highlights practical steps toward greener digital infrastructure. The book's balanced approach ensures readers obtain both technical insights and accessible explanations, positioning them to better understand and contribute to sustainable data solutions in an ever-expanding digital world.

#HostingJournalist #Book

Data Center Power Equipment - 2020 Edition

Understanding the right power equipment is essential for anyone involved in managing or designing data centers, as it ensures continuous operation, reduces the risk of downtime, and supports scalability. By exploring this guide, readers will gain valuable insights into the types of power solutions available, their functionalities, and how to choose the best equipment to meet specific needs.

TL;DR

* Comprehensive coverage of essential power equipment for data centers
* Provides detailed, easy-to-follow guidance suitable for both newcomers and experienced professionals
* Cost-effective recommendations to optimize power management and reduce expenses
* Focuses on reliability and scalability critical to web hosting infrastructures
* Well-organized with practical specifications and real-world applications

Data Center Power Equipment Guide

This guide dives deep into managing and optimizing power equipment in data centers, which is critical for smooth hosting operations. It walks readers through practical questions such as heat recovery and infrastructure hosting services, making it easier to tackle everyday challenges. Whether someone is just starting or looking to refine their strategies, the guide breaks down complex tasks into manageable steps and provides checklists to keep things on track. It's not just for experts; it's designed to help anyone responsible for data center power feel more confident and prepared.

Pros and Cons

Pros:
* ✅ Comprehensive coverage of data center power topics
* ✅ User-friendly self-assessment tools
* ✅ Regular updates keep information current

Cons:
* ❌ Might feel overwhelming for absolute beginners
* ❌ Focused mainly on power equipment, less on other hosting aspects

What People Say

Users appreciate how the guide breaks down complex setups into clear, actionable parts, especially useful for managing hosting infrastructure efficiently.
TechPlanner42: The detailed self-assessment process really helped clarify the strategic and tactical options when handling data center power equipment. It's been great for ensuring all hosting infrastructure tasks are covered thoroughly.

Longevity

The information and tools provided are designed to be evergreen, with lifetime updates ensuring the guide stays useful as hosting technologies evolve.

Innovative Features

The inclusion of dynamic self-assessment dashboards and lifetime-updated content stands out as a smart approach to keeping readers prepared for changing data center power demands.

Key Benefits

* Clear checklists for managing power equipment tasks
* Guides strategic planning with hosting infrastructure in mind
* Includes lifetime updates for ongoing relevance

Current Price: $89.05
Rating: 4.5 (172+ reviews)

FAQ

What Are The Key Considerations When Buying Data Center Power Equipment?

Buyers should prioritize reliability, efficiency, and scalability when selecting data center power equipment. It is essential to evaluate the equipment's compatibility with existing infrastructure and its ability to handle future load increases. Redundancy features also deserve attention to minimize downtime, especially for critical hosting operations. Additionally, energy consumption affects both operational costs and environmental footprint. The guide emphasizes that investing in high-quality equipment may carry a higher upfront cost, but it ensures long-term stability and protects web hosting services from outages.

How Does This Guide Help With Practical Usage And Maintenance?

The guide offers detailed instructions on installation best practices and routine maintenance schedules that prolong the equipment's lifespan. It provides insight into troubleshooting common issues and alerts users to potential risks such as overheating or power surges.
Readers also learn how to optimize power distribution to increase efficiency and reduce downtime, which is vital in data center environments managing hosting services. The guide additionally advises on monitoring tools and metrics for proactively maintaining equipment health.

What Are Common Misconceptions About Data Center Power Equipment Addressed?

A common belief is that cheaper equipment suffices for hosting needs, but the guide clarifies that low-quality power equipment can lead to frequent failures and significant data center downtime. It dispels the myth that power efficiency is a secondary concern by showing how energy savings translate into substantial cost reductions over time. Another important point highlighted is that power solutions must be tailored to specific data center layouts and workloads for optimal performance, rather than adopting a one-size-fits-all approach. This comprehensive understanding helps users make informed decisions that enhance both hosting reliability and operational efficiency.

Wrapping Up

In summary, Data Center Power Equipment: A Complete Guide (2020 Edition) offers an invaluable resource for understanding the critical elements of power management within data centers. It highlights how proper power equipment selection directly impacts the reliability of web hosting services and operational continuity. The guide equips readers with practical knowledge to optimize their infrastructure, ultimately benefiting those who rely on uninterrupted data center performance.

#HostingJournalist #Book

A Detailed Look at Data Center for Beginners: Perfect for Aspiring IT Professionals

This article provides a comprehensive introduction to the book Data Center for Beginners, tailored specifically for those who are new to the field or aspiring IT professionals. Understanding data center design is crucial in today's technology-driven world, as data centers form the backbone of digital infrastructure. By exploring this topic, readers can gain foundational knowledge that supports career growth and technical competence in IT environments.

Data Center for Beginners

This book offers a clear and friendly introduction to data center design, perfect for anyone just starting out in IT. It breaks down complex concepts into easy-to-understand sections, making it approachable whether you're studying on your own or preparing for a new role. It's the kind of guide that feels like a helpful mentor walking you through the essentials, with practical examples that connect theory to real-world setups.

Pros and Cons

Pros:
* ✓ Clear, conversational writing style
* ✓ Helpful diagrams and examples
* ✓ Covers a broad range of foundational topics

Cons:
* ✗ Some sections might feel basic for advanced readers
* ✗ Limited coverage of the latest cutting-edge technologies

What People Say

Readers often mention how approachable the book is, especially for those without a technical background. Many find the practical insights and clear explanations helpful for building a solid foundation in data center concepts.

TechLearner89: The author does a great job explaining data center components without overwhelming the reader. I appreciated the step-by-step approach, especially the sections on cooling and power management, which helped me grasp what's really important in a data center environment.

NewbieITPro: As someone new to IT infrastructure, this book was a solid resource. It's detailed enough to give me confidence but still easy to follow.
The diagrams and real-life examples made it easier to visualize how data centers operate day-to-day.

How Versatile Is It?

It works well for beginners from various backgrounds, whether they're students, career changers, or IT enthusiasts. While it focuses on foundational knowledge, it's flexible enough to be a reference throughout early career stages.

About the Author

Ayomaya is known for making technical subjects accessible and engaging. Their experience in IT education shines through, giving readers confidence that the material is both accurate and easy to digest.

Why It Stands Out

* Breaks down complex topics into simple language
* Includes practical examples that relate to real data centers
* Great for self-study or supplementing formal training
* Covers essential aspects like power, cooling, and security

Current Price: $19.24
Rating: 4.5

FAQ

What Should Beginners Expect to Learn From 'A Detailed Look at Data Center for Beginners'?

Readers can expect a clear and structured introduction to the fundamentals of data center design and operation. The book breaks down complex concepts into manageable sections, making it accessible for those new to IT infrastructure. It covers essential topics such as power management, cooling systems, network architecture, and security considerations, providing a solid foundation for aspiring IT professionals.

Is This Book Suitable for Someone Considering a Career in IT Infrastructure or Data Center Management?

Yes, it is specifically tailored for individuals aiming to enter the IT infrastructure field. The author presents practical insights and real-world examples that help readers understand the day-to-day challenges and best practices in data center environments. It also offers guidance on industry standards and emerging trends, which can be valuable for career development and certification preparation.

What Should Buyers Consider Before Purchasing This Guide, and Is It Worth the Price of $19.24?
Buyers should consider their current level of knowledge and learning goals. This guide is best suited for beginners who want a comprehensive yet approachable resource. At the price of $19.24, it offers good value given its detailed explanations and practical advice. Those seeking advanced technical manuals might find it introductory, but for foundational learning, it is a worthwhile investment.

Wrapping Up

In summary, Data Center for Beginners offers a solid foundation for anyone looking to understand the essentials of data center design. It is an accessible and affordable guide that equips readers with practical knowledge applicable to IT careers. By engaging with this material, readers can build confidence and competence in a critical area of modern technology infrastructure.

#HostingJournalist #DataCenter

Eviden Inaugurates JUPITER, Europe's Most Powerful Supercomputer

Eviden, the Atos Group product brand specializing in advanced computing, cybersecurity, mission-critical systems, and vision AI, has officially inaugurated JUPITER, Europe's most powerful supercomputer.

The inauguration of JUPITER represents a milestone for Europe's scientific and technological landscape. Already ranked as Europe's most powerful HPC and AI system - and the fourth worldwide - the system is poised to become the first on the continent to cross the exascale threshold, capable of executing more than one quintillion calculations per second. This computing power is comparable to the combined output of ten million modern desktop PCs.

The ceremony took place on September 5, 2025, at the Jülich Supercomputing Centre in Germany, and was attended by senior political leaders, including German Chancellor Friedrich Merz, North Rhine-Westphalia's Minister-President Hendrik Wüst, Federal Research, Technology and Space Minister Dorothee Bär, and Ina Brandes, Minister of Culture and Science of North Rhine-Westphalia. The project was procured by the EuroHPC Joint Undertaking (JU) and is hosted at Jülich, one of Europe's most established supercomputing research centers.

JUPITER is built on Eviden's modular data center concept, which allows for scalable, pre-engineered components to be integrated into a single high-performance infrastructure. The booster partition, designed and delivered by Eviden, is powered by 24,000 NVIDIA GH200 Grace Hopper Superchips connected via NVIDIA Quantum-2 InfiniBand technology, a configuration optimized for highly parallel workloads such as AI training and complex simulations.

Energy efficiency has been a central design priority. Eviden incorporated its patented Direct Liquid Cooling technology, which has already positioned JUPITER's JEDI module at the top of the June 2025 Green500 ranking, a benchmark for the most energy-efficient supercomputers globally.
This focus on sustainability addresses both environmental concerns and the operational challenges of running systems at exascale capacity.

AI, Climate Research, Neuroscience

The scientific and industrial implications are extensive. For climate research, JUPITER will enable the ICON atmospheric model to run at resolutions that were previously unattainable, providing more accurate predictions of extreme weather and long-term climate trends. In neuroscience, the system will allow researchers to simulate neural networks at the level of individual cells using platforms like Arbor, offering new insights into memory, learning, and neurodegenerative diseases such as Alzheimer's.

Artificial intelligence is another key area of focus. JUPITER's performance is expected to accelerate the training of large language models, including initiatives such as OpenGPT-X, a multilingual model designed with a particular focus on German. Faster training cycles will support advances in generative AI, with applications spanning scientific discovery, industrial design, and media production.

For European policymakers, JUPITER is more than a technological showcase; it is a strategic asset that enhances the region's digital sovereignty. By combining extreme-scale compute power with sovereign data management and energy efficiency, the supercomputer positions Europe to compete with the United States and Asia in the global race for AI and HPC leadership.

"The scale of JUPITER is transformative," said Eviden executives during the launch. "This system empowers researchers and industries across Europe to accelerate innovation while meeting the highest standards of sustainability and sovereignty."

The supercomputer is expected to become fully available to European researchers, public bodies, and industrial partners in the coming months, marking the beginning of a new era in scientific computing for the continent.
JUPITER’s commissioning underscores Europe’s commitment to building the infrastructure required to meet the twin challenges of technological competitiveness and climate sustainability.
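The article's desktop comparison can be sanity-checked with quick arithmetic. The figures below are the article's own round numbers (one quintillion operations per second, ten million PCs), not official benchmarks; dividing them implies roughly 100 gigaFLOPS per desktop, a plausible throughput for a modern multi-core CPU:

```python
# Sanity check of the exascale/desktop comparison using the article's round figures.
EXA_FLOPS = 1e18           # exascale: one quintillion (10^18) operations per second
NUM_DESKTOPS = 10_000_000  # ten million modern desktop PCs

per_desktop = EXA_FLOPS / NUM_DESKTOPS
print(f"Implied per-desktop throughput: {per_desktop:.0e} FLOP/s")
# i.e. 1e11 FLOP/s = 100 GFLOP/s per PC, consistent with a current desktop CPU
```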

#HostingJournalist #AI

IDC: Industry Clouds Accelerate Transformation in Asia-Pacific

A new report from IDC highlights a sharp rise in cloud adoption across the Asia-Pacific region (excluding Japan), with enterprises in banking, financial services and insurance (BFSI), manufacturing, retail, and healthcare accelerating investments to meet regulatory, operational, and technological demands. The study, titled Cloud Adoption Trends in Asia/Pacific (Excluding Japan) - Industry View, points to hybrid and private cloud strategies as increasingly popular approaches, as organizations look to manage data sovereignty, strengthen cybersecurity, and adopt more predictable cost models.

The report underlines the growing momentum of industry-specific cloud platforms, which offer pre-integrated, compliance-ready capabilities and sector-focused data models. These 'industry clouds' are gaining traction as they promise faster time to value and better alignment with regulatory frameworks.

Financial Services Clouds, Industrial Clouds

In healthcare, cloud platforms with embedded compliance tools, secure data exchange, and AI-assisted diagnostics are being adopted to improve patient outcomes while meeting strict privacy mandates, stated IDC. BFSI institutions are deploying financial services clouds to enable real-time risk analytics, fraud detection, and localized reporting for regulatory compliance. Manufacturers are turning to industrial clouds to integrate IoT, digital twins, and supply chain analytics, enabling greater efficiency and predictive maintenance. Meanwhile, retailers are leveraging cloud platforms to unify customer data, driving AI-powered personalization and omnichannel inventory management.

"Companies across the region are increasingly prioritizing cloud investments not just for infrastructure, but as a strategic driver of transformation, resilience, and growth," said Shouvik Nag, senior research analyst at IDC Asia Pacific.
He added that industry-specific solutions are reshaping how businesses approach cloud transformation, offering tailored stacks and compliance frameworks that support efficiency while meeting sector-specific demands. The IDC findings suggest that in Asia-Pacific, cloud adoption is no longer a question of if, but how quickly and at what scale.

#HostingJournalist #CloudHosting

Pelagos to Build £1.8B 250MW Data Center in Gibraltar

Pelagos Data Centres has announced plans to build a large-scale digital infrastructure project in Gibraltar, unveiling a 250-megawatt data center development that is set to reshape the territory’s economic and technological landscape. The facility, which will be constructed in five phases over the next decade, represents a £1.8 billion ($2.43 billion) private investment and the single largest development project in Gibraltar’s modern history. The announcement was made at a launch event hosted by the Chief Minister of Gibraltar, Fabian Picardo, alongside Pelagos executives and government officials. Scheduled to begin operations in late 2027, the project will continue in rolling phases through to 2033, ultimately creating one of the most powerful and energy-efficient data centers in Europe. Positioned near the Port of Gibraltar on a 20,000 square meter site, the new facility is designed not only to meet the accelerating global demand for high-performance digital infrastructure but also to act as a regional hub for AI-driven innovation. The development comes at a time when artificial intelligence is driving a surge in requirements for compute-intensive capacity, creating opportunities for smaller jurisdictions to position themselves as critical nodes in Europe’s digital ecosystem. Konstantin Sokolov, Chairman of Pelagos Data Centres, described the project as a transformative moment. “The scale of this project marks a new chapter for Gibraltar and for Europe’s digital capabilities,” he said. “Just as electricity and the Internet transformed society in the past, AI is emerging as the defining technology of our time. With our new facility, Pelagos Data Centres is laying the foundation for the next era of AI-driven innovation, positioning Gibraltar as a strategic hub.” Chief Minister Picardo emphasized the significance of the investment for Gibraltar.
“I am delighted that Pelagos Data Centres have decided that Gibraltar is the place to establish their first facility,” he said. “The whole community will benefit from their massive investment and its huge economic impact. I look forward to this project becoming a reality as soon as possible.”

Tier III Standards

Economic benefits are expected to ripple widely across Gibraltar. The project will create up to 500 jobs during its construction phase and around 100 permanent roles once fully operational. Pelagos already employs 50 full-time staff in London and Gibraltar, with plans to expand its local workforce significantly. Beyond direct employment, the development is likely to stimulate the territory’s digital economy, driving demand for local services and strengthening Gibraltar’s profile as a hub for international business and investment. Technically, the new data center will be built to Tier III standards as defined by the Uptime Institute, ensuring high levels of resilience and availability. It will operate as a carrier-neutral facility, serving public and private sector clients, and will seek international certifications covering security, environmental sustainability, energy management, and quality standards. The design targets a Power Usage Effectiveness (PUE) ratio of around 1.2, placing it among the most efficient facilities in Europe. Sustainability has been built into the project from its inception. The data center will be powered independently of Gibraltar’s grid, using a combination of renewable energy and liquefied natural gas (LNG) from the outset, with a stated aim of reaching net-zero operational emissions by 2030. Advanced cooling systems are being developed to minimise water usage, while the company is exploring options for reusing or redistributing excess heat generated by the data centre for community projects. Sir Joe Bossano, Gibraltar’s Minister for Economic Development and Inward Investment, placed the project within a historical context.
“This is the most significant infrastructure investment in Gibraltar since the early 1990s, when we introduced state-of-the-art telecommunications that laid the foundation for online services,” he said. “The technology of the future will be Artificial Intelligence, which requires data, processing power, and energy resources on an unprecedented scale. Our role is to ensure that this facility is delivered as quickly as possible. In this field, speed of delivery is everything.” The facility will be developed in five stages, with each phase scheduled at roughly 18-month intervals. Once complete, the 250MW campus will stand as one of Europe’s largest sovereign data centres. Beyond serving enterprise clients and public institutions, it will provide infrastructure capacity to support hyperscalers, cloud service providers, and AI developers requiring high-performance compute at scale. In addition to its industrial role, the site will incorporate a public leisure facility, which project leaders say is intended to give back to the local community. Joining Sokolov, Picardo, and Bossano at the launch were Christian J.A. Ryan, President of Gibraltar Operations at Pelagos Data Centres, as well as James Levy KC and Tony Provasoli, senior partners at Hassans International Law Firm, reflecting the partnership between private investors and Gibraltar’s government institutions.
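The PUE target of 1.2 mentioned above is simply the ratio of total facility power to the power reaching IT equipment. A minimal sketch of that calculation, using illustrative load figures rather than Pelagos's actual numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT
    equipment power. 1.0 is the theoretical ideal (zero overhead)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative only: if the full 250 MW campus draw corresponds to a
# PUE of 1.2, the IT load is about 208 MW and cooling/conversion
# overhead about 42 MW.
it_load_kw = 250_000 / 1.2            # ~208,333 kW of IT load
overhead_kw = 250_000 - it_load_kw    # ~41,667 kW of non-IT overhead
print(round(pue(250_000, it_load_kw), 2))  # → 1.2
```

A lower PUE means a larger share of purchased energy does useful computing, which is why the metric appears in both the Pelagos and Beyond.pl announcements.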

#HostingJournalist #DataCenter

RETN Partners with JPIX to Expand Remote IX and Connectivity in Japan

Global network services provider RETN has announced a new strategic partnership with JPIX, one of Japan’s most established Internet Exchange Points (IXPs). The agreement designates RETN as an official reseller of JPIX services, expanding its Remote IX portfolio and improving international connectivity into Japan at a time when demand for local content and services is rising rapidly. The deal grants RETN customers direct access to JPIX’s fabric, a critical interconnection platform in Japan’s telecommunications ecosystem with a strong presence across major urban hubs. By leveraging JPIX’s infrastructure, international networks gain the ability to peer with Japanese and global operators efficiently, without needing to establish their own physical infrastructure in-country. This approach lowers entry barriers for enterprises seeking to deliver digital services to Japanese users and enhances the quality of service for customers requiring reliable, low-latency access to one of Asia’s largest economies.

Eurasian Backbone

For JPIX, the partnership extends its reach through RETN’s vast Eurasian backbone, enabling Japanese networks to connect more seamlessly with international operators across Europe and Asia. The agreement strengthens reciprocal connectivity, ensuring that both local and global companies benefit from more direct and scalable routes for data exchange. William Manzione, Product Manager at RETN, said the collaboration marks a step forward in RETN’s efforts to build out its Remote IX services in Asia. “JPIX’s infrastructure allows us to connect a wide range of networks across Japan, giving our clients access to one of the most interconnected digital markets in the world,” he noted. Tetsuya Hamada, Senior Executive Expert at JPIX, emphasized the value of the partnership in enabling international organizations to tap into Japan’s digital economy.
“Together, we will provide smoother and more efficient access to Japan, unlocking new opportunities for businesses and networks worldwide,” he said.

#HostingJournalist #Telecom

Broadpeak Expands CDNaaS with HyperPoPs in Four Countries

Broadpeak has announced a significant expansion of its Content Delivery Network as a Service (CDNaaS), strengthening its global streaming capabilities with the deployment of new HyperPoPs in England, Switzerland, Greece, and Mexico. The move substantially increases the reach and capacity of Broadpeak’s Adaptive Streaming CDN (ASCDN), equipping content providers with the tools to scale video delivery for high-audience events while reducing latency, infrastructure costs, and environmental impact. The company’s CDNaaS is designed as a turnkey platform that eliminates the need for content providers to build and maintain proprietary delivery networks. With streaming demand accelerating, especially around large-scale live sports and global content premieres, Broadpeak is positioning its service as a broadcast-grade alternative to traditional public CDNs. Its HyperPoPs, each capable of more than 1 Tbps throughput, far exceed the cache capacity typically found in standard CDNs. This ensures that spikes in traffic can be handled seamlessly without compromising quality of experience. Unlike conventional approaches, Broadpeak’s HyperPoPs are deployed directly within internet service providers’ local networks. This architecture delivers content closer to end users, reducing latency and improving reliability. It also avoids unnecessary duplication of infrastructure and helps curb power consumption, contributing to broader sustainability objectives. The company emphasizes that efficiency is as critical as scale, particularly as content providers face rising costs and mounting pressure to reduce their carbon footprint.

Performance and Security

The platform is built on Broadpeak’s EdgePeak software, optimized for both performance and security. Beyond handling streaming surges, the system incorporates tools for real-time anti-piracy measures, protecting valuable live and on-demand content from revenue leakage.
Additional features include dynamic ad insertion, personalized content delivery, player analytics, and multi-CDN strategies, as well as support for multicast ABR streaming. A global 24/7 Network Operations Center staffed by video specialists underpins the service, ensuring uninterrupted performance during mission-critical events. “Streaming leaders need performance, scale, security, and sustainability: all in one,” said Jacques Le Mancq, CEO of Broadpeak. “By deploying deeper into local networks and powering operations with our proven EdgePeak engine, we’re helping content providers handle the biggest high-traffic events while cutting infrastructure costs, complexity, and carbon emissions.” This expansion highlights the growing importance of edge-optimized, sustainable streaming solutions as content providers prepare for ever larger and more demanding audiences worldwide.
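The multi-CDN strategies mentioned above typically route each request to the healthiest available delivery network. A minimal hypothetical sketch of such a selector follows; the CDN names, metrics, and threshold are invented for illustration and are not Broadpeak's EdgePeak API:

```python
# Hypothetical multi-CDN selector: pick the lowest-latency network among
# those under an error-rate ceiling, falling back to lowest latency
# overall if none qualify. Names and thresholds are illustrative only.

def pick_cdn(metrics: dict[str, dict[str, float]],
             max_error_rate: float = 0.01) -> str:
    """Return the name of the preferred CDN for the next request."""
    healthy = {name: m for name, m in metrics.items()
               if m["error_rate"] <= max_error_rate}
    pool = healthy or metrics  # fall back if every CDN is degraded
    return min(pool, key=lambda name: pool[name]["latency_ms"])

cdns = {
    "hyperpop-eu":  {"latency_ms": 12.0, "error_rate": 0.002},
    "public-cdn-a": {"latency_ms": 9.0,  "error_rate": 0.030},  # fast but erroring
    "public-cdn-b": {"latency_ms": 25.0, "error_rate": 0.001},
}
print(pick_cdn(cdns))  # → hyperpop-eu
```

Real implementations weigh more signals (cost, capacity headroom, geography), but the core idea is the same: steer traffic per request rather than committing to one delivery network.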

#HostingJournalist #CDNHosting

EXA Infrastructure Unveils Project Visegrád in Central Europe

EXA Infrastructure has unveiled plans for Project Visegrád, a large-scale fiber backbone deployment set to reshape cross-border connectivity in Central Europe. The initiative, announced as the most significant fiber rollout in the region in 25 years, will link Poland, Czechia, Slovakia, and Hungary with EXA’s hyperscale network in Germany and Austria. The new backbone will establish high-capacity fiber routes connecting Warsaw, Poznań, Prague, Bratislava, and Budapest directly to Berlin, Frankfurt, and Vienna. As part of the project, EXA Infrastructure will also expand its metro fiber footprints in Warsaw, Prague, Bratislava, and Berlin to integrate leading carrier-neutral data centers. Central Europe has been one of Europe’s fastest-growing digital economies, yet connectivity to international markets has historically fallen behind that of Western Europe. According to EXA Infrastructure CEO Jim Fagan, Project Visegrád aims to close this gap. “With Project Visegrád we are building the resilient, scalable backbone needed to unlock the region’s full potential, while extending EXA’s reach into new growth markets across the Balkans, Turkey and beyond,” he said.

Connectivity for Hyperscalers, Carriers, Enterprises

The infrastructure will be designed for long-term scalability, beginning with 216-fibre Corning Ultra G.652D cable housed within HDPE ducts containing multiple microducts. This configuration would not only maximize flexibility but also allow for seamless upgrades to future technologies such as hollow-core fiber. Most of the routes will run through protected corridors alongside existing oil and gas infrastructure, including the Druzhba oil pipeline, to ensure high levels of resilience. Mr. Fagan emphasized that the new backbone will deliver a step change in both performance and reliability, setting a benchmark for optical networks in the region.
“It will also provide a future-proof foundation for hyperscalers, carriers and enterprises that require the highest standards of connectivity,” he said. The first fiber routes of Project Visegrád are expected to be operational by mid-2026, with further rollouts continuing through 2027. The initiative signals EXA’s commitment to expanding its footprint in high-growth markets while delivering infrastructure tailored for the increasing demands of hyperscalers and enterprise customers.

#HostingJournalist #Telecom

Lambda Eyes 2026 IPO After $480M Raise, Following CoreWeave’s Debut

Lambda, a U.S.-based cloud and AI infrastructure provider, is reportedly laying the groundwork for a public offering that could take place as early as the first half of 2026. The company has reportedly retained Morgan Stanley, J.P. Morgan, and Citi to guide the process, signaling its intent to join a growing list of AI-focused firms pursuing public listings. Founded in 2012, Lambda specializes in providing on-demand GPU clusters tailored for artificial intelligence workloads, serving hyperscalers, research labs, and enterprises. Lambda has become an important player in the AI cloud ecosystem by giving customers access to NVIDIA-powered infrastructure. Its multi-tenant setups recently integrated NVIDIA’s Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), which improves bandwidth efficiency and reduces latency for distributed AI training workloads. Lambda’s potential IPO follows a wave of investor enthusiasm for companies positioned at the intersection of AI and cloud computing.

$480M Series D

Earlier this year, the company closed a $480 million Series D funding round led by Andra Capital and SGW, with participation from investors such as Andrej Karpathy, ARK Invest, G Squared, In-Q-Tel, KHK & Partners, and NVIDIA. The round brought significant capital for expanding Lambda’s cloud platform and underlined the strategic importance of AI infrastructure in a market experiencing rapid demand growth. The timing also draws parallels with CoreWeave, a direct competitor in GPU cloud services. CoreWeave went public in March 2025, and its stock has more than doubled since the IPO, underscoring investor appetite for AI infrastructure firms. Should Lambda proceed, it will likely be closely watched as a bellwether for the sector’s next phase of growth. For now, no official filing has been made, and the final timing of the listing will depend on market conditions.
Still, Lambda’s move toward a possible IPO highlights how cloud infrastructure players are positioning themselves to capitalize on the surge in AI adoption and the growing need for scalable, GPU-driven computing.
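The bandwidth benefit of SHARP mentioned above comes from moving the reduction step of collectives such as allreduce into the network switches. A back-of-envelope comparison sketches the effect; the ring-allreduce cost model (each GPU transmits 2(n−1)/n of the gradient data) is standard, while treating in-network reduction as roughly one send per GPU is a simplification for illustration:

```python
# Back-of-envelope per-GPU traffic for an allreduce of `size_gb` of
# gradients across `n` GPUs. Ring allreduce: each GPU sends
# 2*(n-1)/n of the data. In-network (switch-side) reduction is
# approximated here as one upstream copy per GPU.

def ring_allreduce_traffic_gb(size_gb: float, n: int) -> float:
    """Data each GPU transmits in a ring allreduce."""
    return 2 * (n - 1) / n * size_gb

def in_network_reduce_traffic_gb(size_gb: float) -> float:
    """Each GPU sends its gradients once; the switch aggregates
    and returns the reduced result."""
    return size_gb

grad_gb = 10.0  # gradients exchanged per training step (illustrative)
for n in (8, 64, 512):
    print(n, round(ring_allreduce_traffic_gb(grad_gb, n), 2),
          in_network_reduce_traffic_gb(grad_gb))
# As n grows, ring traffic approaches 2x the gradient size per GPU,
# while switch-side reduction keeps it near 1x.
```

This roughly halved per-GPU traffic at scale is the kind of bandwidth-efficiency gain the article attributes to SHARP, alongside reduced latency from fewer hops in the reduction.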

#HostingJournalist #AI

DigitalBridge in Talks to Acquire Yondr in $1.61 Billion Deal

DigitalBridge Group is reportedly in advanced discussions to acquire Yondr in a deal valued at approximately $1.61 billion, according to multiple industry reports. The potential acquisition has drawn considerable attention across the technology and financial sectors, reflecting DigitalBridge’s continued push to expand its global digital infrastructure footprint. Yondr, which has built its reputation on data center development and hyperscale capacity delivery, has emerged as a fast-growing player in the infrastructure market. A purchase by DigitalBridge would mark one of the largest deals in the sector this year and significantly strengthen the firm’s global capabilities. DigitalBridge, headquartered in Boca Raton, Florida, has been on an acquisition streak, including its $1.5 billion purchase of WideOpenWest earlier this year, as it seeks to consolidate assets that can serve hyperscalers, enterprises, and cloud providers with expanding capacity needs. The timing of the potential transaction follows a period of solid financial performance for DigitalBridge. In its second-quarter 2025 earnings, the company reported rising revenues and improved profitability, providing it with both the financial stability and access to capital required for large-scale deals. Analysts suggest that the acquisition of Yondr could help DigitalBridge deepen its presence in critical regions while diversifying its portfolio of digital infrastructure services.

Global Digital Infrastructure Platform

As with any major acquisition, regulatory approvals will be required before the transaction can move forward. The move is closely tied to the strategic vision of DigitalBridge’s leadership team, led by Chief Executive Marc Ganzi and supported by recently appointed Chief Strategy Officer Clay Gregory.
The company has emphasized its commitment to growth, with plans to outline further details of its roadmap during upcoming industry and investor conferences in September 2025. If completed, the Yondr acquisition would reinforce DigitalBridge’s position as a leading global digital infrastructure platform, signaling further consolidation in a sector defined by surging demand for data center capacity, cloud services, and hyperscale deployments. For investors, the deal represents both a potential boost to DigitalBridge’s long-term growth prospects and a barometer of accelerating competition in the digital infrastructure market.

#HostingJournalist #DataCenter

Beyond.pl Launches F.I.N. Sovereign AI Factory in Poland

Beyond.pl’s AI supercomputer known as the F.I.N. is now fully operational, marking the official launch of what the company calls the most powerful sovereign AI Factory in Central and Eastern Europe (CEE). The deployment establishes a regional hub for enterprises, startups, research institutions, and public sector organizations to develop, train, and scale advanced AI projects within a secure, locally governed environment. The F.I.N. supercomputer serves as the core of Beyond.pl’s AI Factory initiative, which was first unveiled in May 2025. By creating a sovereign platform for AI development, Beyond.pl seeks to accelerate adoption while ensuring that sensitive data and workloads remain under European security and compliance standards. Company executives frame the development as a milestone not just for Beyond.pl but for the broader CEE region, which has historically lagged Western Europe in access to high-performance AI infrastructure. “The achievement strengthens our position as a pioneer provider of sovereign AI infrastructure in Central and Eastern Europe,” said Wojciech Stramski, CEO of Beyond.pl. “The F.I.N. will allow AI to be developed faster, at greater depth, and with long-term value creation secured. For the region, this means access to world-class infrastructure and software that can directly fuel innovation, business growth, and societal benefit. The future is now.”

NVIDIA DGX SuperPOD

The system is built on NVIDIA DGX SuperPOD reference architecture and is equipped with NVIDIA B200 GPUs using Blackwell architecture. Connectivity is powered by NVIDIA Quantum-2 InfiniBand, and storage is provided by Pure Storage FlashBlade scale-out object storage. This combination would deliver the performance needed to handle workloads ranging from training large language models (LLMs) and generative AI to scientific simulations and enterprise deployments.
Customers can also access NVIDIA AI Enterprise software, which provides a comprehensive platform for developing and deploying pretrained AI models with microservices, libraries, and frameworks. The commissioning of the F.I.N. represents the first stage in Beyond.pl’s broader plan to establish a large-scale sovereign AI Factory at its Poznań Data Center Campus in Poland. Technical teams from Beyond.pl, NVIDIA, and Pure Storage collaborated to bring the system online. Over time, the company intends to expand the platform’s reach to serve more markets and use cases.

Security, Sustainability, Sovereignty

Beyond.pl is positioning itself as the first provider in CEE to deliver a commercial, multi-layered ecosystem of AI services. These include GPU as a Service (GPUaaS), AI as a Service (AIaaS), GPU colocation, and managed services. Together with the company’s broader IT portfolio - spanning colocation, hybrid and managed cloud, managed networks, backup, and disaster recovery - customers gain a comprehensive foundation for digital transformation and AI adoption. Security, sustainability, and sovereignty are central to the project’s design. The AI Factory is hosted on Beyond.pl’s 100MW campus, which remains the only data center in the European Union certified simultaneously at ANSI/TIA-942 Rated 4 and EN 50600 Class 4, the highest levels of resilience and security. The facility runs entirely on renewable energy, maintains a power usage effectiveness (PUE) ratio of 1.2, and holds ISO 27001 and ISO 14001 certifications. By combining advanced infrastructure with regional compliance guarantees, Beyond.pl aims to make the F.I.N. not just a technological achievement but also a competitive advantage for Central and Eastern Europe. For businesses and institutions in the region, it offers an opportunity to participate in the global AI race without compromising data sovereignty or security.

#HostingJournalist #AI

Neterra Reports Growing Power of DDoS Attacks in 2025

Global telecom operator and IT services provider Neterra has reported a sharp rise in the power and sophistication of Distributed Denial of Service (DDoS) attacks during the first half of 2025. The company revealed that it successfully blocked 77,765 attacks between January and June, on track to surpass the 276,993 incidents it mitigated across the whole of 2024. The trend underscores how cybercriminals are refining their methods and escalating attack intensity. While ACK Flood attacks were the most common form of assault last year, 2025 has seen SYN Short attacks emerge as the dominant technique. These newer attacks leverage smaller data packets and are designed to bypass defenses that monitor traffic volumes alone. As a result, they pose challenges even for established mitigation systems. Industry analysts point out that the scale of packet-per-second peaks illustrates the changing dynamics. Neterra observed traffic surges nearly four times stronger than those seen in 2024, with individual attacks peaking at over 38 million packets per second and bandwidth volumes reaching 30 Gbps. Such numbers highlight the growing need for adaptive defenses that evolve alongside attacker strategies.

Advanced Security Technology Investments

The company compared the effect of a DDoS strike to a restaurant being overwhelmed by fake customers filling every table, leaving no space for legitimate patrons. In practice, the result is service disruption and potential business loss for targeted organizations. Neterra’s Chief Technology Officer, Pavel Marchev, emphasized the importance of continuous investment in advanced security measures. “Our new hardware systems successfully neutralized some of the most powerful attacks we have seen. We continue to invest in advanced security technologies to ensure peace of mind for our customers despite the surge in cyber threats,” he said.
The latest figures highlight how DDoS defense has become a critical element of business continuity planning as attacks grow not only in volume but also in sophistication.
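The reported peaks also show why packet rate matters as much as raw bandwidth: 30 Gbps carried at 38 million packets per second implies a very small average packet size, consistent with SYN-style floods that exhaust per-packet processing and connection state rather than link capacity. A quick sanity check on the figures from the article:

```python
# Sanity check on Neterra's reported attack peaks: small average packet
# size is what lets SYN-style floods overwhelm per-packet processing
# long before raw bandwidth limits are reached.

def avg_packet_size_bytes(bandwidth_gbps: float, pps_millions: float) -> float:
    """Average packet size implied by a bandwidth and packet-rate pair."""
    bits_per_second = bandwidth_gbps * 1e9
    packets_per_second = pps_millions * 1e6
    return bits_per_second / packets_per_second / 8

size = avg_packet_size_bytes(30, 38)  # figures reported by Neterra
print(round(size))  # roughly 99 bytes per packet on average
```

At under 100 bytes per packet, volume-only thresholds see a modest 30 Gbps stream while the mitigation hardware must still classify 38 million packets every second, which is the pressure point these attacks target.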

#HostingJournalist #Cybersecurity

IQM Lands $320M in Historic Series B, Expands U.S. Quantum Push

IQM Quantum Computers, a Finnish-headquartered developer of superconducting quantum systems, has secured a record-breaking $320 million (€275 million) in its Series B funding round, marking the largest raise in the quantum sector to date outside of the United States. The round brings the company’s total capital raised to $600 million and underscores the accelerating pace of investment as quantum technology edges closer to commercial deployment. The latest financing was led by Ten Eleven Ventures, a U.S.-based cybersecurity investment firm, marking IQM’s first American investor. Ten Eleven was joined by an expanded commitment from Tesi, the Finnish state-owned venture capital and private equity company that has backed IQM since its inception. Additional participants included both new and returning investors such as Elo Mutual Pension Insurance, Varma Mutual Pension Insurance, the Schwarz Group, Winbond Electronics Corporation, and state-backed funds EIC and Bayern Kapital. IQM plans to use the fresh capital to expand its international footprint, particularly in the United States, while continuing to reinforce its leadership in Europe. The company will channel funds into scaling its assembly lines, data center infrastructure, and chip fabrication capabilities in Finland, supporting its ambition to achieve fault-tolerant quantum computing. A central milestone is the development of systems that can scale from thousands to millions of qubits, paired with robust error reduction and correction technologies. Dr. Jan Goetz, Co-Founder and Co-CEO of IQM, described the round as pivotal for the company’s next growth phase. “This funding round will fuel our company growth, with an accelerated tech roadmap towards error corrected systems from thousand to million qubits. We also focus on strong business expansion in the U.S.
and other global markets based on our attractive on-premises offerings for quantum computers and the recently announced upgrade of our cloud offering,” said Mr. Goetz. He highlighted Ten Eleven’s participation as a critical step for IQM’s entry into the U.S., citing the investor’s experience in scaling companies to market leadership as a major factor in the partnership.

IQM’s Next Stage of Growth Journey

Ten Eleven’s Co-Founder and Managing General Partner Alex Doll emphasized the alignment between cybersecurity and quantum computing as a driver for the firm’s decision to lead the round. “Cybersecurity and quantum share an evolving relationship characterized by common stakeholder communities. This overlap will enable us to provide high-value counsel, capital, and connections to the IQM team,” Doll noted. As part of the investment, Doll will join IQM’s board of directors. Tesi also signaled confidence in IQM’s trajectory, with Juha Lehtola, Director of Venture and Growth Investments, pointing to the company’s progress in technology, production, and customer delivery. “We are happy that Tesi’s revised investment strategy allowed us to significantly increase our investment to support IQM’s next stage of growth journey,” said Juha Lehtola. IQM’s roadmap includes strengthening its global commercial presence while deepening investments in fabrication and R&D. By advancing capabilities in chip production, the company aims to position itself at the forefront of quantum error correction and scalable system design, crucial steps toward achieving practical quantum computing. The raise also underscores the growing maturity of the European quantum ecosystem, with IQM emerging as a flagship example of how regional firms are seeking to balance global expansion with homegrown innovation. As interest from U.S. investors like Ten Eleven increases, IQM is positioned to bridge European expertise with North American market opportunities.
Goldman Sachs International acted as the sole placement agent for the transaction, reflecting the high-profile nature of the round and the level of investor interest in quantum’s near-term commercial potential. With this raise, IQM consolidates its role as a global leader in full-stack quantum computing, navigating the transition from research-led development toward scalable, commercially viable systems. The size and diversity of the investor syndicate point to rising confidence that quantum technology, once largely experimental, is now moving firmly into the realm of enterprise-ready infrastructure.

#HostingJournalist #QuantumComputing

NexQloud Files Six Patents to Advance Global Blockchain Cloud OS

NexQloud Technologies has taken another step toward its vision of a global “cloud operating system” by filing six new patents with the United States Patent and Trademark Office. The applications expand on the company’s previously announced Decentralized Kubernetes Service and are designed to bring orchestration, compliance, and monetization capabilities to AI compute, virtual machines, and multi-cloud environments. Together, the filings form the foundation of what NexQloud calls its Distributed Compute Platform, a framework intended to unify disparate computational resources under blockchain-based governance. The company’s core thesis is that the current model of cloud computing is fragmented, costly, and poorly aligned with the increasing demands for compliance and sustainability. Enterprises often struggle to balance data sovereignty requirements, environmental reporting mandates, and the efficient use of distributed compute resources. NexQloud’s patents aim to address these systemic challenges by using blockchain enforcement, automated orchestration, and a unified control plane to govern workloads across decentralized devices, enterprise data centers, and traditional cloud providers. According to Mauro Terrinoni, CEO of NexQloud, the latest filings move the company closer to its vision of a true operating system for global compute. “With our DKS patent as the kernel, this portfolio completes the architecture for a true Cloud OS,” he said. “We are not merely extending a product; we are defining a new compute layer for the internet. Our Cloud OS provides the missing blueprint to unify disparate resources into a coherent, efficient, and compliant whole, transforming how the world deploys and manages computational power.” The six new applications cover several distinct but interconnected innovations.
A decentralized AI compute system is designed to manage distributed GPU resources for AI and machine learning, combining scheduling with blockchain-based monetization. A distributed compute cloud extends orchestration to virtual machine workloads across decentralized infrastructure while embedding compliance verification in the ledger itself. Another system, the distributed cloud aggregator, applies rule-based logic and machine learning to direct workloads across multiple infrastructure types depending on real-time cost, performance, and data sovereignty requirements.

Blockchain-enabled Marketplace

NexQloud is also proposing a decentralized cloud exchange, a blockchain-enabled marketplace where enterprises can register and monetize excess data center capacity under strict compliance controls. At the governance layer, the company has filed for a Delegated Proof of Stake blockchain protocol featuring NFT-based resource licensing, performance-weighted consensus, and sustainability tracking. These efforts are underpinned by enabling technologies such as a node health scoring system that evaluates compute nodes on performance, reliability, and sustainability, blockchain-enforced geo-compliance that maps workloads to approved locations, and a multi-cloud orchestration control plane enhanced with AI. The combined effect of these systems is to give enterprises more granular control over where and how workloads are deployed, with the promise of lowering costs and ensuring compliance across jurisdictions. By embedding environmental, social, and governance metrics directly into the orchestration and consensus mechanisms, NexQloud also seeks to align enterprise computing strategies with sustainability reporting obligations. Industry analysts note that while decentralized approaches to cloud remain experimental, the intellectual property activity around platforms like NexQloud signals growing interest in alternatives to centralized hyperscaler models.
If successful, the company’s Distributed Compute Platform could provide enterprises with a practical way to leverage underutilized compute capacity worldwide while maintaining confidence in security, governance, and compliance. NexQloud has not disclosed when products based on these patents will be commercially available, but the filings indicate a comprehensive strategy to position itself at the intersection of cloud orchestration, blockchain governance, and AI infrastructure.

#HostingJournalist #AI

Ardoq Launches Enterprise AI Management to Tackle Governance Gaps

Ardoq, a software-as-a-service provider redefining enterprise architecture, has announced the release of its Enterprise AI Management Solution, intended to help businesses gain control, visibility, and compliance over their use of AI. As AI becomes embedded in every part of the organization, executives face the challenge of closing blind spots: shadow AI usage, data flows, compliance requirements, and AI’s influence on business outcomes. The urgency is underscored by frameworks such as the EU AI Act and the Colorado AI Act, and by proposed US legislation like the Algorithmic Accountability Act. Boards are expected to show that AI systems are ethical, transparent, and compliant, but many firms lack the means to provide that kind of evidence.

“AI is bringing about both genuine uncertainty and unprecedented opportunity. The majority of businesses don’t know what data AI affects or where it lives in their operations. To link AI use to strategy, compliance, and risk, they need integrated, enterprise-wide intelligence. By integrating governance into the Enterprise Architecture knowledge graph, Ardoq provides leaders with the visibility to act and the confidence to move quickly and responsibly,” stated Erik Bakstad, CEO and co-founder of Ardoq.

Governance in Context: Linking AI to Value, Risk, and Strategy

Ardoq provides governance in context by relating the use of AI to risk, strategy, and measurable commercial value. Incorporating governance into the Enterprise Architecture knowledge graph gives leaders a comprehensive understanding of how AI is applied, how it interacts with data and systems, and how it aligns with transformation objectives and business capabilities - enabling them to respond to board-level questions clearly and confidently.
The solution rests on four pillars:

* AI Visibility: Locate AI agents and systems, including shadow AI, throughout the company. Map the applications, owners, and data that AI depends on to identify blind spots before they become risks.
* AI Compliance Readiness: Track evolving AI rules and guidelines alongside internal security and governance policies. By centralizing controls, audit trails, and reporting readiness, organizations can demonstrate responsible AI use to regulators, auditors, stakeholders, and their own boards.
* Strategic Alignment: By linking AI use to strategy, capabilities, KPIs, and results, executives can prioritize investments, demonstrate value creation, and retire high-risk or low-value use cases.
* Future-Proofing Governance: Oversight that is scalable and vendor-neutral, adapting to changing models and ecosystems, with support for industry-specific tools, proprietary LLMs, and generative AI - without lock-in.

Customers agree that connected governance is essential to eliminating blind spots and aligning AI with strategy. “When it works together on purpose, everything works better,” stated Henrik Magnusson, Head of Architecture at SmartestEnergy. “To achieve our deep green agenda, Ardoq connects data and complex systems. Organizations must be able to integrate AI into the larger enterprise framework in order to develop it ethically.”

The Governance Gap: AI Uptake Has Exceeded Monitoring

Ardoq was one of the first enterprise architecture platforms to investigate the governance gap developing around AI. In 2024, the use of AI skyrocketed, yet many organizations were still in the dark. Studies from late 2024 showed that:

* Over 50% of workers were using unapproved AI products, putting security and compliance at risk.
* While 95% of CEOs reported AI-related incidents, just 2% of businesses adhered to responsible AI norms.
* AI had been implemented in 93% of organizations, yet only 7-8% had governance structures in place.
* Over 90% said they were unprepared for their AI compliance obligations.

A year on, adoption has only accelerated, but governance has not kept pace. Because of this gap, organizations need more than AI discovery - they need enterprise-wide control.

Ardoq’s Enterprise AI Management Solution

With Ardoq’s new solution, organizations can properly discover, govern, and scale AI:

* Catalog and classify AI agents and systems according to their applications and business purposes. A single source of truth about AI usage - whether formally deployed or surfacing as shadow AI - helps eliminate the blind spots that frequently pose the greatest risk.
* Map data flows and dependencies to visualize how information is processed, shared, and potentially exposed. Leaders can better determine whether sensitive data is being managed effectively and identify where incomplete or redundant data practices could compromise compliance.
* Verify adherence to changing frameworks - proposed federal legislation, new US state regulations, and worldwide AI standards - as well as internal governance, security, and privacy rules. Centrally documenting controls, audit trails, and reporting readiness lets organizations show regulators, auditors, stakeholders, and their own boards that AI is being used responsibly, lowering the risk of costly fines or damage to their brand.
* Connect AI systems to business outcomes to measure not only risk but also value creation. By linking the use of AI to strategic objectives and KPIs, leaders can demonstrate to the board and regulators that AI is being utilized responsibly.
* Future-proof adoption with a vendor-neutral strategy, ensuring that oversight keeps pace with the development of AI ecosystems and models. Because of this flexibility, businesses are not restricted to a single ecosystem and can use generative AI, proprietary LLMs, or industry-specific tools.

Integrated with Enterprise Architecture for Complete AI Governance

Unlike vendor-specific offerings or stand-alone governance systems, Ardoq integrates AI oversight into the overall enterprise environment through its Enterprise Architecture knowledge graph. This approach is thorough and practical because it links governance to strategy, people, processes, and technology. For instance, it helps leaders understand how AI chatbots are used in customer service, what data they can access, how that data is classified, which laws govern it, and what controls are in place to ensure compliance. Examining how a financial model interacts with risk management procedures yields a 360-degree view of artificial intelligence in the workplace.

Successful AI adoption cannot be handled in isolation. Traditional governance systems may track compliance checklists or tool inventories, but they cannot demonstrate how AI relates to people, business capabilities, or long-term strategy. This limited perspective risks treating AI as a collection of unrelated projects rather than a force that permeates the entire organization.

Ardoq’s strategy is distinct. Through the integration of AI governance into its Enterprise Architecture knowledge graph, Ardoq gives executives the ability to:

* Examine how AI projects fit into transformation objectives and business capabilities
* Recognize how fundamental systems, AI models, and the data that drives them are interdependent
* Respond with assurance to board-level inquiries such as “Where are we exposed?” and “What value is AI delivering?”
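Ardoq’s knowledge graph is proprietary, but the general mechanism - relating AI systems, data, classifications, and regulations so that exposure questions become graph queries - can be sketched in a few lines. Everything below (the triples, relation names, and entity names) is invented purely for illustration:

```python
# A knowledge graph reduced to its simplest shape: (subject, relation, object) triples.
TRIPLES = [
    ("support-chatbot", "uses_data", "customer-pii"),
    ("support-chatbot", "owned_by", "cx-team"),
    ("risk-model", "uses_data", "trading-positions"),
    ("customer-pii", "classified_as", "sensitive"),
    ("customer-pii", "governed_by", "GDPR"),
]

def objects(subject, relation):
    """All objects reachable from `subject` via `relation`."""
    return {o for s, r, o in TRIPLES if s == subject and r == relation}

def ai_systems_touching_sensitive_data():
    """A board-level question as a graph query: which AI systems reach sensitive data?"""
    exposed = set()
    for s, r, o in TRIPLES:
        if r == "uses_data" and "sensitive" in objects(o, "classified_as"):
            exposed.add(s)
    return exposed
```

Here `ai_systems_touching_sensitive_data()` flags only the chatbot, because the graph records that the data it uses is classified as sensitive - the same two-hop traversal pattern answers "which laws govern it" or "who owns it" by swapping relations.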
Ardoq also claims first-to-market advances in AI for enterprise architecture. With the release of its MCP Server, Ardoq says it became the first EA platform to allow direct, secure queries from AI assistants like Microsoft Copilot and Claude. This open environment avoids vendor lock-in, gives customers options, and ensures that AI outputs are contextually grounded. In contrast to vendors who treat AI as an add-on chatbot, Ardoq’s AI is designed to be part of the model: every output is traceable back to its source, workspace permissions are respected, and its logic is explained. Customers would also gain from AI-powered accelerators - such as capability mapping, process modeling, and viewpoint generation - that speed up analysis while keeping governance at the center.

“The foundation of Ardoq’s approach is in enterprise architecture,” stated Dr. Jason Baragry, Ardoq’s chief enterprise architect. “We do more than just highlight AI’s presence. We disclose the ways in which it affects the processes, people, and skills that propel transformation. Leaders need that knowledge to govern effectively and create value.”

Product Availability and Webinar

On September 18, as part of the launch of the Ardoq IlluminAIte Webinar Series, Ardoq will present a live demonstration of its Enterprise AI Management Solution, which will be accessible to both new and existing customers. With AI putting new emphasis on how businesses scale, align, and control their technology choices, IlluminAIte is designed to help executives understand how AI fits into their operations, how it relates to risk and strategy, and how enterprise architecture can transform oversight into opportunity.

#HostingJournalist #AI

SAP Expands Sovereign Cloud to Boost AI Innovation and Compliance

SAP has significantly expanded its strategy for digital sovereignty and AI innovation, unveiling a broadened Sovereign Cloud portfolio that is designed to give enterprises, governments, and regulated industries new ways to balance compliance, innovation, and operational control. The company is positioning the expansion as both a European priority and a global initiative, reflecting growing demand for secure and sovereign cloud environments.

The updated portfolio introduces additional deployment options, including SAP Cloud Infrastructure, SAP Sovereign Cloud On-Site, and country-specific services such as Delos Cloud in Germany. Each of these SAP offerings is intended to provide customers with full-stack sovereignty - spanning data, operational, technical, and legal dimensions - while still enabling access to SAP’s broader innovation ecosystem, from the Business Technology Platform to embedded AI capabilities. Customers are able to choose how and where their cloud is deployed, tailoring solutions to specific regulatory environments and security profiles without sacrificing scalability or speed.

SAP Cloud Infrastructure represents the foundation of the model within the EU, developed with open-source technologies and operated within SAP’s European data centers to ensure compliance with regional data protection laws. Delos Cloud, meanwhile, supports sovereign cloud requirements in the German public sector.

The centerpiece of the expansion is SAP Sovereign Cloud On-Site, a globally available option that allows SAP to manage and operate its cloud infrastructure directly inside a customer’s own facility or a chosen data center. This model gives organizations the highest degree of physical control and data residency while maintaining full compatibility with SAP’s architecture and innovation roadmap.
Thomas Saueressig, Member of the Executive Board of SAP SE for Customer Services & Delivery, described the expansion as critical to Europe’s future role in AI and digital transformation, stressing that sovereignty must underpin the region’s ability to apply AI to specialized industry use cases. Martin Merz, President of SAP Sovereign Cloud, characterized sovereignty as the key to Europe’s digital resilience, noting that scalable and future-ready frameworks are increasingly essential as organizations modernize. Deloitte Partner Stephen Glynn added that sovereign cloud solutions are rapidly shifting from optional to essential, particularly in public-sector and regulated industries.

Deployment Models

SAP emphasizes that the Sovereign Cloud initiative is not tied to a single deployment model but instead designed as a spectrum of options, ranging from SAP-hosted services to customer-owned sites and even hyperscaler-based models where appropriate. This flexible framework reflects customer demand for greater control over sensitive data and infrastructure, particularly as regulatory scrutiny intensifies and digital transformation accelerates.

The company has outlined four core capabilities at the center of its sovereign cloud approach: data sovereignty, which ensures customer ownership and regulatory compliance; operational sovereignty, which provides transparency and oversight through SAP-managed environments; technical sovereignty, which gives customers the freedom to run workloads on the infrastructure that best fits their requirements; and legal sovereignty, which ensures alignment with regional legal frameworks and accountability standards.

In practical terms, these capabilities allow organizations to run critical workloads such as the SAP Business Suite in sovereign environments, while still benefitting from continuous innovation cycles.
This includes integration with SAP Business Technology Platform and SAP Business AI, ensuring that sovereignty does not come at the expense of speed or depth of innovation. By anchoring sovereignty into the core of its cloud strategy, SAP aims to provide the compliance assurance that regulated industries require while also supporting the rapid deployment of next-generation AI applications.

SAP has pledged more than €20 billion in long-term investment to strengthen Europe’s digital autonomy, underscoring the company’s determination to support regional resilience with secure, regulation-compliant solutions. The expansion of the Sovereign Cloud portfolio is already being rolled out across multiple countries, supported by hundreds of localized delivery experts and a wide array of certifications to meet regional standards. The global availability of the On-Site model signals that SAP is extending these sovereignty principles beyond Europe, aiming to serve international markets where regulatory and operational needs also demand localized control.

For SAP, the expansion reflects a broader shift in how cloud services are delivered and consumed. Customers increasingly expect not only technological innovation but also assurance that their sovereignty requirements can be met without compromise. By embedding sovereignty into infrastructure, operations, and legal frameworks, SAP is offering customers both flexibility and assurance in an era defined by AI-driven transformation, regulatory complexity, and the pressing need for digital resilience.

#HostingJournalist #ManagedHosting

Arelion Expands Scandinavian Fiber to Meet AI Demand

Arelion is investing heavily in its Scandinavian fiber backbone as the region emerges as one of Europe’s most competitive hubs for artificial intelligence and data center infrastructure. The company, which operates one of the largest global Internet backbones, confirmed plans to deploy new high fiber-count cables in its existing ducts between Stockholm, Oslo, and Copenhagen. The move is intended to address the sharp rise in demand from hyperscalers and enterprises building AI-driven workloads.

The project, scheduled for completion in 2026, aims to secure long-term fiber availability, enhance network resilience, and provide direct connectivity between Scandinavia and key global markets across Europe and North America. According to Arelion, the strategy reflects not only immediate AI-related demand but also long-term growth projections for digital infrastructure in the Nordic countries.

Scandinavia has increasingly attracted hyperscale data center investment due to its availability of land, reliable and sustainable power sources, and comparatively stable energy pricing. Oslo’s data center sector alone already provides 423 megawatts of capacity, and analysts forecast Nordic capacity will increase by 280 to 580 megawatts annually. Projections suggest that the region’s data center construction market could reach $7.38 billion by 2030, growing at a compound annual rate of 23.47 percent. At the same time, the regional AI market is expanding even faster, with a projected CAGR of 26.24 percent to nearly $20 billion by 2031.

Balancing Performance with Sustainability

For Arelion, the fiber upgrade strengthens its mesh of terrestrial routes across Scandinavia, which currently connects to 13 subsea cables serving the Nordics and Baltics. The new installation will not only interconnect hyperscale data centers with newly built last-mile infrastructure but also leverage existing duct capacity laid decades earlier.
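The market projections cited above can be sanity-checked with simple compound-growth arithmetic. The sketch below is purely illustrative: the source does not state a baseline year, so 2025 is an assumption used only to show how a CAGR maps a future value back to an implied starting size:

```python
def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Back out the starting value implied by a future value and a CAGR."""
    return future_value / (1 + cagr) ** years

# $7.38B construction market by 2030 at a 23.47% CAGR, assuming a 2025 base year:
construction_2025 = implied_base(7.38, 0.2347, 5)   # ≈ $2.57B

# ~$20B regional AI market by 2031 at a 26.24% CAGR, assuming a 2025 base year:
ai_2025 = implied_base(20.0, 0.2624, 6)             # ≈ $4.94B
```

In other words, the projections imply roughly a tripling of the construction market and a quadrupling of the AI market over those assumed horizons.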
Arelion, then operating as Telia International Carrier, originally constructed its Scandinavian network 25 years ago with multiple ducts along major Nordic routes, anticipating the sort of future demand now materializing. Chief executive Daniel Kurgan described the investment as part of a long-term approach that balances performance with sustainability. Installing new cables within existing ducts, he said, significantly reduces environmental impact compared to building an entirely new duct system. He emphasized that this expansion represents the first stage of a broader multi-year program designed to maximize existing assets, scale capacity, and increase diversity across the network. By upgrading its infrastructure, Arelion is reinforcing its ability to provide customers with access to what independent rankings list as the number one global Internet backbone. Its service portfolio includes IP Transit, Wavelengths, Dedicated Internet Access, Cloud Connect, Global Ethernet Virtual Circuit, and DDoS Mitigation. These offerings are expected to benefit from enhanced scalability and resilience as Scandinavian enterprises, content providers, and service providers accelerate adoption of AI and other emerging digital applications.

#HostingJournalist #Telecom

Alibaba Boosts Chip Design to Cut Reliance on U.S. Tech

Alibaba is stepping up its efforts to reduce reliance on U.S. technology with the development of a new artificial intelligence processor, a move intended to help plug the gap left by restrictions on American chip exports to China. The initiative highlights how Chinese firms are racing to build domestic alternatives amid intensifying competition in the global AI sector.

Industry sources report that Alibaba has produced a processor more versatile than its earlier generation of AI chips. As China’s largest cloud computing provider, the company has long been a major client of NVIDIA, the U.S. leader in AI semiconductors. However, Washington’s export controls have disrupted the flow of NVIDIA’s most advanced products into China. The H20, designed as a China-specific alternative after a 2023 ban on the H100 and Blackwell series, has become the most powerful U.S. AI chip legally available in the country.

Even the H20, though, has faced complications. Earlier this year the Trump administration blocked its sale before later granting approval. Soon after, Beijing advised leading firms, including Alibaba and ByteDance, not to purchase the chip, citing security risks that NVIDIA denies exist. The resulting uncertainty has left Chinese companies with limited options and created space for domestic innovation.

Cloud Growth Drives Alibaba’s Chip Strategy

Analysts caution that China remains years away from matching the performance of the most sophisticated American chips, in large part because of restrictions on access to advanced semiconductor manufacturing tools. Still, Alibaba’s latest design signals how domestic players are working to reduce exposure to supply chain turbulence while supporting national objectives for technological self-sufficiency. The Chinese government has poured resources into this strategy, investing heavily in local semiconductor capabilities and positioning AI as a priority sector.
For Alibaba, strengthening its chip lineup dovetails with its core cloud computing business, which it sees as both a growth engine and a testing ground for its silicon. Despite geopolitical headwinds, Alibaba’s cloud division delivered a 26 percent revenue increase in the April–June quarter, outperforming market expectations on surging demand for AI services. Those results underscore the company’s dual role as a driver of domestic AI adoption and a participant in Beijing’s broader strategy to develop a resilient, homegrown technology ecosystem.

#HostingJournalist #AI

DARE1 Cable Expansion to Link Kenya and South Africa by 2028

Djibouti Telecom has unveiled plans to extend its subsea footprint further down the African coastline, with construction of a new cable route between Kenya and South Africa scheduled to begin next year and targeted for completion by 2028. The project will see the Djibouti Africa Regional Express 1 (DARE1) cable system stretched from Mombasa to Mtunzini in South Africa, a distance of roughly 3,200 to 3,500 kilometers, with additional landings in Tanzania, Mozambique, and Madagascar.

The initiative reflects Djibouti’s ambition to evolve from a Red Sea gateway into a pan-African hub for digital traffic. By broadening DARE1’s reach, the operator aims to address growing demand from regional carriers, hyperscalers, and enterprise clients seeking higher capacity, lower latency, and more resilient network options between East and Southern Africa.

$200M Committed to 12 Cable Projects

The move comes against the backdrop of several high-profile outages that highlighted the fragility of current connectivity in East Africa. In May 2024, simultaneous cuts on the EASSy and SEACOM cables disrupted services between Kenya and Tanzania, underscoring the risks of relying on a limited number of subsea routes. For industries that depend heavily on seamless connectivity, such as streaming platforms, financial technology providers, and cloud service operators, diversified pathways are viewed as a critical safeguard against downtime.

DARE1 is currently backed by Telkom Kenya, Somtel International, and Hormuud Telecom Somalia. The new extension will build on this foundation and complement Djibouti’s broader strategy of heavy investment in digital infrastructure. Over the past decade, the country has committed more than $200 million to a dozen cable projects and joined regional programs such as the Eastern Africa Regional Digital Integration Project.
These efforts are designed to expand affordable broadband and facilitate digital trade, particularly for landlocked neighbors like Ethiopia that rely on Djibouti for international access.

#HostingJournalist #Telecom

OpenAI Plans One-Gigawatt Data Center in India

OpenAI, the company behind ChatGPT, is preparing to significantly expand its global infrastructure footprint with plans to establish a large-scale data center in India. According to Bloomberg News, the facility would have a minimum capacity of one gigawatt, making it one of the most ambitious data infrastructure projects tied to artificial intelligence in the region.

The move comes as India becomes OpenAI’s second-largest market by user base and a focal point for its international expansion strategy. The company, backed by Microsoft, has formally registered as a legal entity in India and begun building a local workforce. In August, OpenAI confirmed it would open its first Indian office in New Delhi later this year, reinforcing its long-term commitment to the country. Details on the precise location and construction timeline for the data center have not been disclosed, though speculation is mounting that CEO Sam Altman could reveal more when he visits India in September.

India and the Global AI Ecosystem

Industry observers view the initiative as part of OpenAI’s broader Stargate program, a global AI infrastructure push announced earlier this year. The Stargate effort, unveiled in January with backing from SoftBank, Oracle, and OpenAI itself, is tied to a projected $500 billion investment in next-generation AI facilities.

For India, the development signals a potential step change in its role within the global AI ecosystem. With its skilled technical workforce, expanding digital economy, and growing demand for AI-powered services, the country has emerged as a natural hub for investment. A gigawatt-scale data center would not only serve OpenAI’s operational needs but could also accelerate regional access to advanced AI capabilities.
If realized, the project would highlight how competition in AI is increasingly intertwined with the race to build massive, power-intensive digital infrastructure capable of supporting next-generation workloads.

#HostingJournalist #AI

CrowdStrike to Acquire Onum, Boosting Falcon SIEM

CrowdStrike has announced plans to acquire Onum, a specialist in real-time telemetry pipeline management, in a move designed to enhance its Falcon Next-Gen SIEM platform and reinforce its vision of building the operating system for cybersecurity. Financial details of the deal were not disclosed.

The acquisition, once finalized, will enable CrowdStrike to integrate Onum’s pipeline and in-memory data processing technology directly into Falcon, improving speed, cost efficiency, and control for enterprise customers adopting AI-driven security operations.

George Kurtz, CEO and founder of CrowdStrike, emphasized the strategic importance of data in cybersecurity. “Our Next-Gen SIEM is the engine that powers the modern SOC, and data is the fuel that makes the engine run,” he said. By streaming high-quality, filtered telemetry into Falcon, Kurtz argued, CrowdStrike will provide the foundation for autonomous detection and response at scale, empowering security teams with complete visibility and control over their data ecosystems.

The Falcon Next-Gen SIEM has become central to CrowdStrike’s platform strategy, positioning itself as a hyper-scalable foundation for both cybersecurity and IT observability. Customers are increasingly turning to Falcon to tackle complex challenges such as AI-driven SOC transformation and cost containment, areas where legacy SIEMs often struggle. By removing onboarding friction, the Onum integration promises to accelerate adoption and help enterprises realize faster returns on their SOC modernization investments.

Real-Time Filtering, Optimization, In-Pipeline Analysis

Onum, founded by Pedro Castillo, is recognized for its proprietary stateless, in-memory architecture, which would offer speed and efficiency advantages over legacy batch processing methods.
Its technology enables real-time filtering, intelligent optimization, and in-pipeline analysis, meaning detections can begin before data even enters the Falcon platform. This shift has the potential to redefine SOC operations by turning pipelines into intelligent data processors rather than passive conduits.

The performance improvements are notable. According to CrowdStrike, Onum processes up to five times more events per second than competing solutions, cuts data storage costs by as much as 50% through smart filtering, and enables incident response up to 70% faster while reducing ingestion overhead by 40%. These efficiencies directly address the pain points of data migration and management, long considered bottlenecks in enterprise SOC transformations.

For Onum, the deal represents an opportunity to scale its vision globally. “Onum was founded on the belief that pipelines should do more than transport data - they should transform data into real-time intelligence,” said Mr. Castillo. “By joining CrowdStrike, we can deliver this vision at unprecedented scale to accelerate SOC transformation on a global scale.”

With the rise of agentic security and the increasing complexity of IT and cybersecurity operations, the ability to manage telemetry pipelines intelligently is becoming critical. By bringing Onum into its ecosystem, CrowdStrike is betting that a tightly integrated, AI-powered SIEM can outpace rivals while setting a new standard for how security data is processed and leveraged.

The acquisition highlights a broader industry trend toward embedding intelligence at every stage of the data lifecycle, reducing costs while enhancing outcomes. For CrowdStrike customers, the integration could mean not only faster detection and response but also a more autonomous and resilient SOC fit for the AI era.
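Onum’s architecture is proprietary, so the following is only a minimal, hedged illustration of the general pattern the article describes: stateless stages that filter and analyze telemetry while it is in flight, so noise never reaches storage and simple detections fire before ingestion. All event shapes and thresholds are invented:

```python
from typing import Callable, Iterable, Iterator, Optional

Event = dict
Stage = Callable[[Event], Optional[Event]]  # a stage returning None drops the event

def run_pipeline(events: Iterable[Event], stages: list[Stage]) -> Iterator[Event]:
    """Stream each event through stateless stages; dropped events never reach
    downstream storage, which is where the claimed cost savings come from."""
    for event in events:
        for stage in stages:
            event = stage(event)
            if event is None:
                break
        else:
            yield event

def drop_noise(e: Event) -> Optional[Event]:
    # Smart filtering: discard low-severity telemetry before ingestion.
    return e if e.get("severity", 0) >= 3 else None

def tag_suspicious_login(e: Event) -> Optional[Event]:
    # In-pipeline analysis: flag a brute-force pattern while data is in flight.
    if e.get("type") == "login" and e.get("failed_attempts", 0) > 5:
        return {**e, "alert": "possible brute force"}
    return e
```

Because each stage holds no state between events, the pipeline can process events one at a time in memory rather than in batches, which is the essence of the stateless, in-memory approach described above.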

#HostingJournalist #Cybersecurity

TD SYNNEX and AWS Partner to Accelerate Cloud and AI Adoption

TD SYNNEX has announced a new strategic collaboration agreement (SCA) with Amazon Web Services (AWS), aimed at accelerating the adoption of artificial intelligence (AI), cloud migration, modernization, and marketplace growth across North America, Latin America, and the Caribbean. The agreement underscores TD SYNNEX’s growing role in the AWS ecosystem while extending resources and investment support to small and midsize businesses (SMBs), independent software vendors (ISVs), and mid-market partners throughout the Americas.

Through the new SCA, TD SYNNEX will provide partners with enhanced access to AWS services and tools, allowing them to expand their AI and cloud portfolios. The agreement also aims to make AWS Marketplace programs more accessible to ISVs, helping them monetize faster and connect with new customer segments. For many partners, particularly smaller firms with limited resources, this could lower barriers to entry and accelerate cloud and AI adoption at scale.

The collaboration builds on an existing SCA between TD SYNNEX and AWS in Europe, reflecting a deepening global relationship between the two companies. TD SYNNEX already holds a range of AWS specializations and program designations, including Migration and Modernization, Cloud Operations, Education and Government, and Amazon EC2 for Windows Server Delivery. It also supports AWS Partners through its StreamOne global cloud platform, offering consumption management, AWS Marketplace procurement, and end-to-end lifecycle support.

The company’s portfolio of enablement programs - including Destination AI, Cloud Labs, and its AI Accelerator Practice Builder - will be expanded under the agreement. These initiatives provide technical training, go-to-market support, and resources that help partners develop and monetize AI and cloud solutions.
Reyna Thompson, President of North America at TD SYNNEX, emphasized the value of the agreement in addressing partners’ challenges. “Our partners are under increasing pressure to modernize while navigating limited financial resources, rapidly evolving AI and cloud landscapes, and complex marketplace environments,” she said. “Through our SCAs with AWS, TD SYNNEX is uniquely positioned to help partners overcome these challenges with service-led expertise and support.”

AWS Innovation Meets TD SYNNEX Partner Network

For Latin America and the Caribbean, the agreement is also expected to drive regional growth. “This agreement underscores the credibility of our team and confidence in our strategic direction,” said Otavio Lazarini, President of TD SYNNEX in Latin America and the Caribbean. “It will allow us to accelerate business development in the region and provide our partners with cutting-edge technology to take their operations to the next level.”

The partnership has already demonstrated tangible benefits for smaller organizations. John Zemonek, Founder and CEO of Aligned Technology Group, noted how TD SYNNEX’s AWS expertise simplified engagement for smaller firms. “They helped us create a roadmap that made partnering with a global leader like AWS easier,” he said. “The continued investment through the AWS SCA will only strengthen our ability to deliver innovative solutions that drive real impact for our customers.”

AWS echoed the significance of the collaboration. “Our work with TD SYNNEX brings together AWS innovation and their extensive partner network, creating new opportunities for businesses across the Americas on their cloud and AI journeys,” said Brian Bohan, Director, Consulting COE at AWS. The agreement highlights how distributors and hyperscale cloud providers are jointly shaping the pace of digital transformation across the Americas.
By streamlining access to AWS resources, TD SYNNEX and AWS aim to broaden participation in AI and cloud adoption, from startups and SMBs to enterprises seeking faster modernization and stronger business outcomes.

#HostingJournalist #CloudHosting

New ABB Electrical Solutions Target AI-Driven Data Center Growth

ABB has introduced a suite of new electrical infrastructure products aimed at helping data centers save labor and space while preparing for the dramatic load growth anticipated over the next three years. With global data centers expected to double or even triple their electrical demand in that timeframe, ABB’s new solutions are designed to streamline installation, reduce complexity, and support the evolving requirements of high-density digital infrastructure.

The company’s latest innovations include compact Color-Keyed aluminum narrow-tongue, long-barrel, two-hole lugs, which provide a lighter and more cost-effective alternative to traditional copper lugs. By eliminating post-installation crimping through pre-terminated wire connections, these aluminum lugs reduce both space and labor requirements. Their narrow-tongue design enables larger gauge wires to be terminated in tight enclosures, while the chamfered barrel improves wire insertion and crimping efficiency. Dual-rated for both copper and aluminum conductors, a single lug can support a wide range of cable sizes. ABB, a pioneer in compression connectors with nearly 70 years of experience, continues to expand its widely adopted color-coded system trusted by electrical installers worldwide.

Another key product line, the T&B Liquidtight Systems cable entry plates, was designed specifically for high-density data center applications. These plates allow for the efficient entry of multiple cables into enclosures while maintaining liquid-tight ingress protection. Available in fixed and configurable types, they replace conventional cable glands to accelerate installation and improve organization. The flexible entrance membrane supports a broad range of cable sizes, accommodates both terminated and unterminated cables, and retains a reliable seal even without cables in place. Rated UL 508 and IEC IP66, the plates are engineered for durability in industrial and commercial environments, supporting the rapid pace of data center expansions and migrations.

AI Growth Demands Stronger Data Center Power

To further simplify installation, ABB is launching its first Ocal PVC-coated to PVC conduit adapter. Traditionally, connecting rigid metallic conduits with PVC conduits required multiple components and manual adjustments. ABB’s one-piece adapter provides a streamlined alternative that reduces labor without increasing cost. By creating a secure transition between underground PVC-coated rigid metallic conduit and above-ground PVC, the adapter delivers consistency and speed for contractors working on critical projects.

Jack Bellissimo, Senior Vice President of Product Management, Marketing & Strategy for ABB Installation Products in the U.S. and Latin America, emphasized the role of these products in enabling modern infrastructure growth. “The lifeblood of a data center is its electrical infrastructure as it rapidly evolves to accommodate the growth of AI, electrification, and advanced technologies,” he said. “Our new solutions enhance interconnection and cable management strategies, helping data centers handle increasing volumes, improve versatility, and boost performance.”

The launches come as part of ABB’s broader strategy to strengthen its role in data center infrastructure. The company has invested more than $100 million in recent years to expand U.S. operations, manufacturing, and sustainability initiatives, ensuring products remain close to customers while addressing surging demand. With the combination of lighter-weight lugs, scalable cable entry plates, and simplified conduit adapters, ABB aims to provide data centers with solutions that reduce installation time and operational costs while delivering the reliability required for mission-critical workloads.

#HostingJournalist #DataCenter

AI Growth to Outpace IT Hardware as Spending Plateaus

Global enterprise IT budgets are being reshaped by the surge in artificial intelligence, with IDC warning that traditional hardware spending is set to plateau as AI absorbs future growth. According to the research firm’s latest Worldwide Artificial Intelligence IT Spending Market Forecast, global AI investment is expected to expand at a compound annual growth rate of 31.9% between 2025 and 2029, reaching $1.3 trillion by the end of the forecast period.

That trajectory is being driven by the rise of Agentic AI - systems that can operate independently or in coordinated fleets to execute tasks, manage workflows, and build new applications. IDC’s analysis suggests these developments will increasingly determine how IT leaders prioritize budgets, with dollars flowing toward platforms that build, manage, and secure agents rather than into general-purpose compute.

“Application and services providers that fail to embed AI deeply into their products risk losing share to those that do,” said Rick Villars, Group Vice President of Worldwide Research at IDC. “The alignment between investment growth and IT leaders’ trust in AI’s ability to shape business outcomes is undeniable. Those who hesitate will fall behind.”

While cloud providers and hyperscalers are expected to continue building dense, compute-heavy environments to run these workloads, IDC forecasts that other parts of the IT stack will stagnate. Spending on traditional non-AI servers and storage, in particular, is projected to flatten as enterprises and service providers pursue efficiency and consolidation rather than expansion. In effect, AI will become the gravitational force that captures spending growth that once flowed into conventional hardware refreshes.

Spending on AI-enabled Applications

The shift underscores how AI is not simply another workload but a restructuring force across the enterprise technology landscape. IDC projects service providers will account for 80% of infrastructure spending through 2029, reflecting the scale required to support massive agentic environments. At the same time, spending on AI-enabled applications will outpace all other segments, triggering competitive realignment across the software industry.

Crawford Del Prete, President of IDC, highlighted the organizational implications of this shift. “As agents become more commonplace, roles inside enterprises will evolve rapidly. Some will see productivity gains, others will become redundant. Both workers and enterprises will need to adapt with unprecedented agility,” he said.

The consequences extend beyond budgets. Enterprises embracing AI are expected to adopt AI-driven network operations, anomaly detection, and self-healing capabilities to streamline IT management. These changes will accelerate digital transformation, but they also concentrate risk - placing greater importance on leadership and strategy in navigating the transition.

IDC’s forecast suggests a future in which AI-driven innovation defines competitive advantage while reshaping the tech stack beneath it. Traditional IT hardware will remain, but its growth curve will flatten, overshadowed by the acceleration of agentic systems and AI-enabled applications. For enterprise CIOs, the challenge is not just whether to invest in AI, but how to reallocate budgets from legacy infrastructure toward the emerging foundation of digital business.
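As a quick sanity check on IDC’s headline figures, the implied 2025 baseline can be derived from the forecast endpoints. This is a back-of-envelope sketch, not an IDC number: it assumes the 31.9% CAGR compounds over the four annual steps from 2025 to 2029, ending at $1.3 trillion; the article does not state the actual base-year figure.

```python
# Back-of-envelope: implied 2025 AI spending from IDC's forecast endpoints.
# Assumption (not stated in the article): 31.9% CAGR compounds over the
# four annual steps 2025 -> 2029, ending at $1.3 trillion.
cagr = 0.319
end_2029 = 1.3e12          # $1.3 trillion forecast for 2029
years = 2029 - 2025        # four compounding steps

implied_2025 = end_2029 / (1 + cagr) ** years
print(f"Implied 2025 AI spending: ${implied_2025 / 1e9:.1f}B")  # roughly $430 billion
```

In other words, the forecast implies roughly a tripling of AI spending over the window, which is consistent with IDC’s framing of AI absorbing growth that once went to hardware refreshes.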

#HostingJournalist #IaaSHosting

Digital Realty Breaks Ground on Its First Data Center in Rome

Digital Realty has broken ground on its first data center in Rome, marking a significant expansion of its PlatformDIGITAL footprint across the Mediterranean. The new facility, ROM1, is designed to become a key connectivity hub, linking Europe with Africa, the Middle East, and Asia through its integration with subsea cable systems and its role as a highly connected, carrier-neutral data center.

Situated within 15 kilometers of the coast, ROM1 will initially deliver over 3MW of installed IT capacity. The site spans 22 hectares - around 2.3 million square feet - with plans for future expansion that would make it one of Italy’s largest data center campuses. Positioned at the intersection of global and regional traffic flows, ROM1 is expected to play a central role in meeting the rising demand for AI-driven workloads and low-latency interconnection across southern Europe.

Rome’s strategic importance is central to Digital Realty’s expansion. As Italy’s second-largest city by GDP and the third-largest in the EU, the capital is increasingly recognized as a digital gateway to the Mediterranean. By enabling faster and more resilient connectivity, ROM1 is expected to reduce latency between northern and southern Italy, improving the country’s competitiveness in international markets. The facility will also serve enterprises, cloud providers, and carriers seeking a sustainable and scalable infrastructure option in a rapidly growing region.

“Rome is not only a key economic hub in Southern Europe, but also a critical entry point to the broader Mediterranean – a region that is fast emerging as a vital gateway for global connectivity,” said Alessandro Talotta, Managing Director of Digital Realty in Italy. He emphasized that the project marks a milestone in the company’s strategy to establish interconnected hubs in growth markets, enabling customers to extend their reach across Europe and beyond.

Global Sustainability Strategy

ROM1 adds to Digital Realty’s Mediterranean portfolio, which already includes facilities in Athens, Marseille, and Zagreb, as well as the recently launched HER1 site in Crete. Plans are also underway for a new interconnection hub in Barcelona, further strengthening the company’s presence across Europe’s southern edge.

Aligned with its global sustainability strategy, Digital Realty has committed to powering ROM1 entirely with renewable energy. The initiative reflects the company’s pledge to balance expanding capacity with reducing environmental impact, an increasingly critical factor for both hyperscale operators and enterprises prioritizing sustainable digital transformation.

Scheduled for completion in 2027, ROM1 represents the first phase of a broader campus development in Rome. Once fully built out, the site is expected to underpin a new digital ecosystem in the city, reinforcing Digital Realty’s role as one of the most influential players in shaping connectivity and data infrastructure across the Mediterranean.
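The article’s two area figures for the ROM1 campus convert consistently; a minimal unit check, using only standard conversion factors and the 22-hectare figure quoted above:

```python
# Unit check: ROM1 is quoted as 22 hectares, "around 2.3 million square feet".
# 1 hectare = 10,000 m^2; 1 m^2 = 10.7639 sq ft.
hectares = 22
sq_m = hectares * 10_000
sq_ft = sq_m * 10.7639
print(f"{hectares} ha = {sq_m:,} m^2 = {sq_ft / 1e6:.2f}M sq ft")
```

The conversion comes out near 2.37 million square feet, matching the article’s rounded figure.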

#HostingJournalist #Telecom

Novacore From India Unveils NVIDIA Blackwell GPU Cloud

Novacore Innovations, headquartered in Mumbai, has announced the deployment of its GPU cloud platform powered by NVIDIA Blackwell servers, marking a milestone for India’s AI infrastructure and global competitiveness. Novacore has also secured ₹44.6 crore ($5.1 million) in financing led by Rashi Fincorp, with U.S. and Abu Dhabi participation.

Founded in late 2024 by San Francisco-based Ranbir Badwal and Mumbai-based Aryamaan Singhania, Novacore is leveraging competitive electricity costs, India’s skilled technical workforce, and Hyderabad’s elite power grid and datacenter ecosystem to provide cost-effective AI infrastructure for startups, researchers, and enterprises across India, the United States, and the Middle East. The line of credit will fund the rollout of Novacore’s first Hyderabad Blackwell cluster, strengthening domestic compute capacity while helping research teams and enterprises run advanced AI workloads without relying on overseas resources.

Serving Key AI Markets

Novacore supports three high-growth markets: India’s startups, U.S. innovators, and organizations in the Middle East. Customers benefit from lower costs, quick provisioning, and scalable GPU power for generative AI, LLMs, scientific computing, and real-time analytics. To aid adoption, Novacore is offering free trials of Blackwell clusters to qualifying startups and labs in each region.

NVIDIA B200 Performance

Central to Novacore’s platform is the NVIDIA B200 server, the Blackwell-generation successor to the H100/H200. The B200 delivers up to 2.3× higher peak performance and double the real-world AI speed of prior hardware. With 192GB HBM3e memory and 8TB/s bandwidth, it can train trillion-parameter models at scale. Fifth-gen tensor cores and dual transformer engines accelerate training up to 3× and boost inference throughput by as much as 15×, while offering 25× greater energy efficiency to reduce costs.

“From Mumbai’s leadership to Hyderabad’s operational excellence, we have built Novacore to combine technical depth, reliability, and reach,” said Aryamaan Singhania, Co-founder. “By focusing on efficiency and talent, we are delivering unmatched value to innovators in India, the U.S., and the Middle East.”

“Our goal is to democratize access to the most advanced computing,” said Ranbir Badwal, Co-founder. “What some call an almost datacenter bubble in India has kept hosting costs well below the U.S., where companies face bidding wars over datacenters. This lets us offer American startups and researchers the GPU power they need - so they can spend more on breakthroughs instead of overpaying for compute.”
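For readers unfamiliar with Indian numbering, a crore is 10 million, so the two financing figures in the article imply a particular rupee-dollar rate. The check below is illustrative only; the actual rate used in the deal is not stated.

```python
# A crore is 10 million (1e7). Novacore's financing: Rs 44.6 crore ~= $5.1M.
crore = 1e7
inr_total = 44.6 * crore           # Rs 446,000,000
usd_total = 5.1e6                  # $5.1 million, as reported
implied_rate = inr_total / usd_total
print(f"Implied exchange rate: {implied_rate:.1f} INR per USD")
```

The implied rate lands in the high 80s, consistent with where the rupee traded against the dollar in 2025, so the two figures in the article agree with each other.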

#HostingJournalist #DataCenter
