
Posts by CIO.com - The voice of IT leadership

From logistics that runs on individual experience to logistics that runs on data: the real value of the SGH Group's push for "data-driven management"

## The wall of person-dependence: leaving the era of experience and intuition behind

The logistics industry faces a major challenge: the wall of "person-dependence" that has long propped up frontline operations. Parcel delivery has depended on the experience of veteran drivers. Judgments such as which route is most efficient, in which time slots recipients tend to be out, and how much volume will surge in peak season rested on tacit knowledge accumulated in each driver's head. In parcel delivery, where regional characteristics and customers' daily rhythms intertwine, differences in experience translated directly into differences in productivity and service quality. As labor shortages deepen and the share of new hires and partner-company drivers grows, however, this person-dependence has become a serious risk. Stricter working-hour regulation under Japan's "2024 problem" has also made the old experience-driven way of working hard to sustain. The front line has begun to see clearly that business as usual has no future. The SGH Group therefore set out in earnest to turn operations that depended on veterans' experience into operations supported by data.

## AI changed the front line: digitizing handwritten waybills and compounding efficiency gains

The SGH Group began tackling the shrinking working population around 2018. With the rapid expansion of e-commerce, parcel volumes kept rising while frontline staffing stayed limited, and it was clear this would become a major problem. The response was a drive for operational efficiency. At the time, Sagawa Express handled about 1.4 billion parcels a year, up to roughly 5 million a day at peak, and thanks to the spread of e-commerce about 90% of delivery waybills were already electronic.

Building on this data, the company introduced "Smart Shuhai" (スマート集配), a system that automatically displays optimal delivery routes. Where veterans once assembled routes from experience, the system has AI propose an optimal route from parcel information, map data and past delivery records. Average driving distance fell by 5 to 10%, and new hires and partner-company drivers could now deliver efficiently as well.

## Full digitization of data for further efficiency gains

The next big wall was full digitization of waybills. Even with 90% digital, the remaining 10%, some 400,000 to 450,000 handwritten waybills a day, persisted. As long as they did, standardizing and digitizing operations was difficult, and drivers could not grasp their total parcel volume until the day itself, a heavy burden on morning work. To solve this, the group set out to develop its own AI-OCR. Reading handwritten multi-part carbon-copy waybills proved far harder than expected: faint impressions, smudges and uneven pen pressure defeat naive recognition. The team tuned the models and retrained them repeatedly, painstaking work that continues today. Through this trial and error, digit-reading accuracy reached 99.9%, and in April 2022 the group achieved full digitization, handwritten addresses included. All parcel information is now digitized by 4 a.m. the following morning, and frontline work has changed dramatically.
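The article does not reveal how Sagawa Express's AI-OCR pipeline works internally, but the loop it describes, a recognition model improved through repeated retraining while hard-to-read waybills keep arriving, is commonly implemented as confidence-based triage: sure reads flow straight into the parcel database, uncertain reads go to a person, and the human corrections become the next round of training data. A minimal Python sketch of that pattern follows; the field names and the 0.98 threshold are illustrative assumptions, not SGH figures.

```python
from dataclasses import dataclass

@dataclass
class OcrField:
    name: str          # e.g. "tracking_number" or "address" (hypothetical names)
    text: str          # text proposed by the OCR model
    confidence: float  # model confidence in [0.0, 1.0]

# Illustrative threshold: fields the model is sure about flow straight into
# the parcel database; uncertain ones are queued for human review, and the
# corrections can later be fed back as training data.
AUTO_ACCEPT = 0.98

def triage(fields: list[OcrField]) -> tuple[list[OcrField], list[OcrField]]:
    """Split OCR output into auto-accepted fields and a human-review queue."""
    accepted = [f for f in fields if f.confidence >= AUTO_ACCEPT]
    review = [f for f in fields if f.confidence < AUTO_ACCEPT]
    return accepted, review

if __name__ == "__main__":
    scanned = [
        OcrField("tracking_number", "4512-8890-23", 0.999),
        OcrField("address", "Minato-ku ...", 0.81),  # faint carbon copy
    ]
    ok, needs_review = triage(scanned)
    print(f"auto-accepted: {len(ok)}, sent to human review: {len(needs_review)}")
```

Raising the threshold trades review workload for error rate, which is why tuning of exactly this kind tends to be the painstaking work that, as the article notes, continues today.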
The fully digitized information fed the next efficiency gain: loading. Drivers previously had to hunt down the parcels for their own area and load them in the best order, work that depended on individual experience and opened a wide gap between veterans and newcomers. The answer was the "night-loading app" (夜積みアプリ). Drivers pre-register loading patterns by address for their assigned area, and the app presents a loading plan to match. The app cut loading time by 20 to 30% on average; at some branches, each driver saves 15 to 20 minutes. Drivers' morning workload fell sharply and departure times stabilized.

AI-OCR data capture, together with loading and route optimization, became emblematic of the SGH Group's DX: not mere process improvement but structural reform of how the front line works. These accumulated efforts carried the group's DX to its next stage.

## The data foundation AI built leads to the next optimization: the future opened by the partnership with GCJ

With AI-OCR advancing the digitization of handwritten waybills and all parcel information in place by 4 a.m., the SGH Group entered a new stage: logistics in which the data is complete. Work that once depended on frontline experience could now be redesigned around data, and the loading improvements and Smart Shuhai route optimization were its emblematic results. The group's goal, however, is not efficiency alone. It needed a next stage that optimizes pickup-and-delivery area design itself, weighing fluctuating parcel volumes, requested delivery windows, absence data and geographic characteristics. The transport-capacity shortage projected for 2030 is severe, and logistics cannot be sustained on the current trajectory. To evolve its total logistics functions and build a next-generation logistics system, Sagawa Express concluded a strategic partnership agreement with Google Cloud Japan (GCJ) in 2024 to strengthen its integrated logistics capabilities through DX.

The collaboration also feeds into more advanced optimization of Smart Shuhai. Kazuki Nanbu (南部一貴), general manager of SGH's corporate planning department (position as of the interview), puts it this way: "Conventional AI route optimization worked well for scheduled deliveries and areas with stable volumes, but it hit its limits in parcel delivery, where volumes change daily and area characteristics are complex."

Even before Smart Shuhai was introduced, Sagawa Express designed efficient routes from composite data: cargo type and volume, time slots, customer requests and geographic information. Combining that information with Google Cloud's data-analytics platform will turn route design, long dependent on human experience, into a scientific, data-driven process. If the effort succeeds, delivery efficiency will rise further, drivers' workload will fall, and logistics will become resilient to the surges of peak season. The digitization program that began with AI-OCR is evolving, through the collaboration with GCJ, toward still more advanced optimization.

The SGH Group's DX has progressed in stages, from solving frontline problems to building a data foundation to advanced optimization with AI. The data unlocked by fully digitizing paper waybill information can now be put to many uses; it sits at the center of the group's logistics reform and is the driving force shaping the future of logistics.

## Results and walls: productivity gains and the difficulty of group-wide collaboration

Once the data foundation began to take shape, visible changes followed one after another on SGH Group front lines. AI-OCR digitization of handwritten waybills put all parcel information in place by 4 a.m.; the night-loading app eased drivers' mornings and stabilized departure times; delivery efficiency rose and customer experience (CX) improved. Data was starting to deliver value to management, the front line and customers alike. But the wider the results spread, the more clearly a new challenge emerged: data integration with companies that have joined the group.

Beyond that wall, though, lie new possibilities. Integrating international-logistics data with domestic-delivery data would shorten lead times. Combining warehouse data with parcel-delivery data would enable more accurate demand forecasting. Once data is connected across the whole group, the value of logistics expands further. Amid these intersecting results and walls, the SGH Group's data-driven management was heading for its next stage: a new future of logistics drawn by data.

## A course to the future: the new shape of logistics that data will draw

Amid intersecting results and challenges, the SGH Group has been advancing to the next stage. Data-driven management is a means of achieving frontline efficiency and fair freight-rate collection, and at the same time the foundation on which the future of logistics itself will be built. The future the group envisions is not a mere extension of process improvement: it is a world in which data elevates logistics into a strategic domain of corporate management. New data-centered processes, from more efficient inquiry handling to automated billing and optimized pickup requests, will raise the added value of logistics further.

Another pillar of that future is visualizing environmental impact. Because CO₂ emissions in logistics vary greatly with transport modes and route design, data-based visibility is indispensable for advancing GX (green transformation). For shippers, too, understanding the environmental load across their entire supply chain matters, and logistics data is its foundation.

Further ahead lies advanced optimization combining AI and data: a future in which fluctuating parcel volumes, customer behavior patterns, traffic conditions and weather are analyzed in real time and optimal delivery plans are generated automatically is not far off. From an era that relied on experience and intuition to one supported by data and AI, logistics now stands squarely at that turning point.

Tatsuya Isshi, vice president and team manager at Gartner Japan, comments: "'Unscientific' decision-making that leans on accumulated human knowledge and experience is inevitably person-dependent, and as an organization grows, unevenness and variation creep in. As these pile up, they drive up costs, squeeze earnings and erode a company's competitiveness. Data can be difficult to handle, but if you understand its nature and use it skillfully, you can run the business 'scientifically' and solve those problems. This is the age of AI. AI is precisely a business machine that behaves scientifically on the basis of data, and it will be fascinating to watch how the SGH Group puts AI to work from here."

Data changes logistics, and logistics changes society. That future is already in motion, and the SGH Group's challenge continues.
The silent failure between approval and delivery

I spent years optimizing for the approval. The deck, the business case, the stakeholder pre-reads, the timing of the ask. I got good at it. What I got wrong was thinking it was the hard part. The meeting ends. The executive nods. Someone takes notes. And the initiative, fully blessed and budgeted, begins to disappear. This is not a failure most organizations recognize as failure. There is no incident report. No postmortem. The initiative does not collapse. It attenuates. It sits in a queue that keeps growing. It gets discussed in steering committees but never quite becomes someone’s primary obligation. It lives on roadmaps and slide decks for months, sometimes years, until someone finally asks whether it is actually moving. By then, the people who approved it have moved on to other priorities. The window for the original business case has shifted. And the organization has quietly absorbed the initiative into the ambient noise of everything that is important but not yet urgent.

Research consistently finds that IT project failure rates remain stubbornly high, and in my experience, a significant share of those failures never show up in the failure statistics because the initiative never officially dies. It just stops moving. I have watched this happen in my own work and in the work of dozens of CIOs I have spent time with. I stopped treating the approval conversation as the moment that mattered. The thirty days that follow it are.

## What approval actually creates

When an executive committee approves an initiative, it creates permission. It does not create momentum. I did not fully understand this distinction until I was on the wrong side of it. Approval is a point-in-time event. The business case gets presented, the budget line gets allocated and the executive sponsor says they are behind it. What approval does not do is change anyone’s existing obligations, shift how performance is measured or move the initiative to the top of any operating manager’s actual priority list.

I ran a significant infrastructure modernization several years ago. The approval process was thorough. We had spent three months building the case, aligned every major stakeholder and walked through the ROI in two separate senior reviews. When the green light came, I felt the particular satisfaction of a hard-won approval that had finally landed. Ninety days later, almost nothing had moved. A vendor milestone slipped because no one on the business side had cleared the dependency. A department lead who had been fully aligned in the approval meeting was now routing around the project entirely, building a workaround that would take two years to untangle. We never recovered the original timeline. The business units that were meant to co-own implementation were still running on their existing plans. The middle managers responsible for change adoption were responsive in meetings and absent in execution. The steering committee met every two weeks to discuss status, and the status was always “progressing.” The gap between what the status reports said and what was actually happening was wide enough that I missed it until a downstream dependency forced the issue.

What I learned from that experience, and have seen confirmed repeatedly since, is that the approval conversation is largely about the future. It is about what the organization will do, what it will become, what value it will unlock.
Execution requires something the approval conversation never addresses: changing what people do today, with the time and attention they have this week, within a system that is already fully committed to its existing obligations.

## Where ownership goes

The most reliable predictor of post-approval failure I have found is ambiguous ownership below the sponsor level. Executive sponsors approve initiatives and then return to their day jobs. This is expected and appropriate. The problem is that sustained delivery requires ownership at the operating level, and the approval process rarely establishes it. The sponsor is accountable in the governance sense. Nobody is accountable in the daily work sense.

What I have seen, consistently, is that the senior team believes the operating layer is executing. The operating layer is waiting for clearer direction, additional resources or some confirmation that this initiative genuinely outranks the twelve other things on their list. Both groups think the other one is moving. Neither is wrong, exactly. The accountability between them has just never been established. I have seen this play out the same way enough times that I now know what to watch for. In the weeks after approval, everyone is cooperative. Meetings happen. Documents get drafted. A project manager is assigned. The initiative looks like it is in motion. It took me too long to learn to ask the question that actually reveals where things stand: who would lose something measurable if this initiative stalled for sixty days?

In most cases, the honest answer is nobody. The executive sponsor’s performance review does not depend on it. The business unit leads are measured on their core operations. The project manager can report on process without being accountable for outcomes. The initiative exists in a kind of organizational float, technically active and practically stalled. When I started asking that question earlier, before the first status report was ever written, the answer changed how I structured accountability from the beginning. It also changed who I had the conversation with. The sponsor could tell me the initiative mattered. The operating manager could tell me whether it had landed anywhere in their actual week.

## The invisibility problem

I have found that the failure that follows a successful approval is quiet in a way that makes it easy to miss. It does not look like failure. It looks like normal organizational complexity. Initiatives in this state generate activity. There are meetings, documents, status updates. The dashboard stays green because the people filling it in are reporting on process milestones that are real, while the underlying delivery is drifting. I have sat in steering committees where every workstream was “on track” and the project was, in practical terms, already over. That drift is only visible to someone watching the relationship between reported progress and actual organizational behavior.

By the time the gap becomes undeniable, the costs are significant and harder to recover from than if the stall had been caught at week eight rather than month ten. The team has moved on. The business case assumptions have aged out. The executive sponsor’s attention has shifted to newer priorities. Restarting the initiative requires relitigating a decision that everyone thought was settled.
In my experience, the more common failure is subtler than the ones that get analyzed: an initiative that had genuine executive support never developed the operating-level traction to survive contact with the organization’s existing priorities. The approval was real. The commitment was real. The failure was in assuming that approval and commitment would translate automatically into changed behavior at the working level.

What I have come to watch for instead is whether, in the first thirty days after approval, anything in the day-to-day operation of the business actually changes. Not the project plan. Not the governance structure. The actual work people do on Tuesday afternoon. When the answer is no, the initiative is already in trouble. The meeting that mattered happened three weeks ago, and most of the people in the room have already moved on.

**This article is published as part of the Foundry Expert Contributor Network.**
AI hype to AI value: Escaping the activity trap

At nearly every board meeting now, CIOs are walking leadership through AI progress decks filled with familiar numbers: tools deployed, pilots underway, adoption rates rising quarter after quarter. At the same time, Gartner forecasts that global AI spending will reach $2.52 trillion in 2026, up 44% from the prior year. The investment is accelerating fast. The more important question is whether the value is keeping up.

Even with all the momentum, the results are far less impressive than the activity suggests. CXOTalk research reported in early 2026 that while 88% of companies are using AI in some form, only 6% are seeing clear financial returns. One MIT study found that nearly 95% of projects fail to produce measurable results within the first six months. At the same time, pressure is building. The 2025 Kyndryl Readiness Report found that 61% of senior business leaders feel growing pressure to prove that AI is delivering value, while the Teneo CEO and Investor Outlook Survey showed that 53% of investors expect returns within six months. Taken together, these point to a growing disconnect. Boards are hearing confidence, CFOs are asking for returns and CIOs are often reporting activity instead of impact.

The problem is not AI itself. The real problem is what I would call the Activity Trap: the assumption that if AI is being adopted, it must be creating value. That trap is easy to fall into because activity is much easier to measure than outcomes. Companies count how many AI tools they have purchased, how many pilots are underway and how many licenses are being used, then present those numbers as proof of progress. But more tools do not automatically lead to better business results. More pilots do not mean returns have been achieved. Higher adoption does not, by itself, create value. The board hears momentum, the CFO receives numbers that are difficult to tie to ROI and spending continues without a clear answer to the most important question: what has improved?

## 3 ways the activity trap shows up

### 1. The productivity measurement gap

This pattern is already playing out in real boardrooms. Take a large U.S. financial firm that rolled out a major AI productivity platform to 40,000 employees in 2025. Six months later, 78% of licensed users had opened the tool at least once. On paper, that gave the CIO a strong story to tell the board: adoption was high, usage looked healthy and the rollout appeared successful. But when the CFO asked a simpler question, the story fell apart: what had the company gained? No one could say how much time had been saved, whether work had become faster or whether costs had come down. The company had tracked usage, but not value. There had been no baseline before deployment, no agreed method for measuring impact and no clear owner responsible for proving results. So, the issue was not that the technology failed. The issue was that the organization never defined success in business terms before it invested. And that is not unusual. In many companies today, this is exactly how AI is being approached.

### 2. Pilot purgatory — Where 73% of AI projects stay

The McKinsey 2025 State of AI report suggests that nearly 73% of AI initiatives never make it beyond the pilot stage. The reason is usually not that the technology cannot perform the task. It is that the organization never clearly defined what business success was supposed to look like in the first place.
Too often, pilots are designed to answer a narrow question: Can the tool do this? But that is only half the question. The more important one is: Does this create value for the business? If the pilot is not tied to a business case from the start, there is no real basis for deciding whether it deserves to move into production. This is how the Activity Trap shows up at the pilot stage. A pilot is considered successful if it ran smoothly, produced output or demonstrated technical capability. But the real outcomes that matter, such as revenue generated, cost avoided, process time reduced or risk lowered, were never defined as success criteria. So, the pilot “works,” yet the business still does not know whether it was worth doing.

### 3. The board confidence gap

There is a growing gap between confidence and measurement in AI adoption. For example, a recent Logicalis report shows just how wide it is: 94% of surveyed CIOs say they are actively pursuing AI, yet 89% also admit they are still “learning as they go,” and many believe adoption is moving faster than their organizations can properly manage. And yet, success continues to be reported upward. That is where the real disconnect begins. The board hears momentum. The organization feels progress. But underneath that confidence, the actual business impact often remains unclear. No one is necessarily being misleading. This is usually not about exaggeration or bad intent. It is a more subtle problem: visible activity starts to look like measurable success. That is the Activity Trap at the executive level. The more effort an organization puts into displaying new tools, pilots, dashboards and adoption numbers, the easier it becomes to create the impression that AI is working, even when the outcomes have not been clearly defined, measured or proven.

## 5 questions that expose the activity trap

Before the next AI update goes to the board, it is worth pausing and asking a few harder questions:

1. **What value did AI deliver last quarter in real terms?** Not projected benefits. Not vendor claims. Not assumed future upside. What changed in the business because of it? Did revenue increase? Did costs fall? Did turnaround times improve? Did errors decline? If those results cannot be shown clearly, then the organization may be reporting motion, not value.
2. **What was the baseline before implementation?** Every real improvement needs a “before” and an “after.” Without a baseline, even honest progress becomes difficult to prove. The story may sound persuasive, but it remains largely interpretive. A baseline keeps the conversation anchored in evidence.
3. **How much effort has gone into measuring outcomes as opposed to simply deploying tools?** Deployment is visible. It creates announcements, dashboards and board slides. Measurement is quieter work. It is slower, less glamorous and often postponed. But that is where value is either confirmed or exposed as wishful thinking. If no one is seriously measuring outcomes, the Activity Trap is already in place.
4. **How many pilots were deliberately stopped because they failed to deliver?** Every serious investment portfolio should include some efforts that were tested and discontinued. If an organization claims that none of its AI pilots failed, that usually does not signal exceptional success. More often, it suggests weak measurement or an unwillingness to shut things down. That is how zombie pilots accumulate: projects that remain active on paper but no longer create meaningful value.
5. **What is being reported upward?** Outcome-based metrics or activity-based metrics? Go back and review the last few board presentations. Were leaders shown business impact, or were they shown rollout statistics, user counts and implementation updates? That pattern reveals more than the slide deck itself. It shows what the organization truly values and what it may still be avoiding.
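Questions 1, 2 and 4 become far easier to answer when baseline and outcome are captured as data rather than recollection. A minimal sketch of a pilot scorecard, with field names and numbers invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class PilotResult:
    name: str
    baseline_cost: float    # measured before deployment (the "before")
    current_cost: float     # measured after deployment (the "after")
    annual_run_cost: float  # licenses, infrastructure, support

def verdict(p: PilotResult, min_net_saving: float = 0.0) -> str:
    """Scale a pilot only if measured savings beat its own running cost."""
    net = (p.baseline_cost - p.current_cost) - p.annual_run_cost
    return "scale" if net > min_net_saving else "stop"

pilots = [
    PilotResult("invoice triage", baseline_cost=900_000,
                current_cost=610_000, annual_run_cost=120_000),
    PilotResult("meeting summaries", baseline_cost=200_000,
                current_cost=195_000, annual_run_cost=80_000),
]
for p in pilots:
    print(p.name, "->", verdict(p))
# invoice triage -> scale
# meeting summaries -> stop
```

The explicit `stop` verdict matters as much culturally as technically: a pilot with no measured net benefit is retired rather than left to become a zombie.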
## The escape: Outcome-first AI governance

Getting out of the Activity Trap does not require better AI. It requires better governance. The first shift is ownership. Every meaningful AI investment should have a business leader accountable for outcomes, not just a technical owner responsible for implementation. Deployment matters, but deployment alone is not the point. Someone on the business side must own the question of whether the investment delivered value.

The second shift is clarity before launch. Success should be defined upfront, not reconstructed later under pressure. That means identifying in advance what the investment is expected to change: revenue, cost, error rates, turnaround time, customer experience or risk exposure. If success cannot be described clearly before deployment, it will be almost impossible to measure honestly afterward.

The third shift is discipline around stopping. Not every pilot deserves to become a program. Organizations need explicit criteria for continuation, scale and termination. Otherwise, they end up with zombie pilots—initiatives that consume budget, remain technically alive and create the appearance of progress without producing meaningful results. That is where governance maturity really begins: not with launching more pilots, but with assigning clear accountability, measuring what matters and being willing to stop what is not working.

Recent research points to how wide this gap still is. A recent Info-Tech Research Group report found that leaders rate AI governance as highly important, but far fewer believe their organizations are executing it effectively. The companies starting to close that gap are usually the ones that make the shift early from tracking activity to tracking outcomes. That will likely be the dividing line in this next phase of AI investment. The organizations that succeed will not necessarily be the ones that deploy the most tools. They will be the ones that learn to measure outcomes early, govern AI with discipline and separate real value from visible motion. The ones that remain stuck in the Activity Trap will keep spending through one of the biggest technology investment cycles in recent memory, only to find themselves unable to answer the simplest question when finance asks: what did all this produce?

And that is the deeper lesson. This is not primarily a technology failure. It is a governance failure. It starts with what gets measured, what gets reported and what gets challenged in the next board presentation. If the CFO cannot clearly explain what the AI program is worth, then the organization is not managing value. It is managing activity.

**This article is published as part of the Foundry Expert Contributor Network.**
The changing face of IT: From operator to orchestrator

For decades, IT organizations were measured by stability, uptime, cost efficiency and service delivery. Success meant systems ran reliably, incidents were minimized and budgets were controlled. That model is no longer enough. In today’s environment, defined by cost pressure, supply chain volatility and accelerating digital expectations, the role of IT is fundamentally transforming. The modern CIO is no longer just an operator of systems, but an orchestrator of business value.

## The new mandate: Business value over technology

Digital transformation was once synonymous with technology modernization. But leading organizations have learned a hard truth: Technology does not create value, outcomes do. Today, CIOs are accountable for:

* Margin improvement and cost reduction
* Faster product development cycles
* Supply chain resilience
* Operational efficiency and quality

This requires a fundamental shift in mindset: “Don’t sell technology. Enable business value and let technology follow.” Every digital investment must tie directly to measurable impact: EBIT uplift, working capital improvement and productivity gains, not just system upgrades.

## Run and transform: The dual engine of modern IT

The transition from operator to orchestrator is anchored in a dual mandate: _Run the business + Transform the business_

**Run the business** ensures:

* Secure, resilient IT and OT environments
* Stable ERP and plant operations
* Compliance and cybersecurity
* Predictable service delivery

**Transform the business** drives:

* Data, AI and automation at scale
* Digital capabilities across engineering, manufacturing and supply chain
* Agile, product-centric ways of working

The differentiator is not managing these separately but orchestrating them seamlessly together. This orchestration is what elevates IT from a support function to a strategic partner.

## From projects to products: Rewiring the operating model

Traditional IT is structured around projects and technology silos. High-performing organizations are shifting to product and platform operating models aligned to business value streams. This means:

* Product teams own outcomes, not just delivery
* Platform teams enable reuse, scalability and speed
* Business and IT operate as one integrated team

The impact is significant:

* Faster decision-making
* Clear accountability for outcomes
* Reduced duplication and total cost

The guiding principle becomes simple: Standardize first. Digitize second. Scale through platforms.

## Digital thread: Unlocking end-to-end value

One of the biggest unlocks in industrial enterprises is the digital thread connecting engineering, manufacturing, supply chain and commercial systems into a unified ecosystem. When connected, organizations gain:

* Real-time visibility across the value chain
* Faster product development cycles
* Cost transparency from design to delivery
* Predictive, data-driven decision-making

Without this integration, enterprises operate in silos — resulting in inefficiencies, delays and margin erosion. The digital thread is not just a technology concept; it is a business capability multiplier.

## AI as a force multiplier, not a side initiative

Artificial intelligence is rapidly becoming embedded across every business function — but its true value lies not in isolated use cases, but in scaling intelligence across the enterprise.
Leading organizations are moving beyond experimentation to:

* Embed AI into core workflows (engineering, quality, supply chain)
* Automate decision-making at scale
* Enable predictive and prescriptive insights

Examples include:

* Predictive quality models reducing defects before they occur
* AI-driven quoting improving margins and win rates
* Intelligent supply chain analytics optimizing inventory and logistics

The shift is clear: _From dashboards → to decisions → to autonomous execution_

However, AI’s success depends on two critical enablers: Trusted data and organizational adoption.

## Citizen development: Scaling innovation beyond IT

One of the most powerful — and often underestimated — levers of transformation is citizen development. In a world where demand for digital solutions far exceeds IT capacity, empowering business users to build solutions is no longer optional; it is essential. Citizen development enables:

* Faster identification and execution of use cases at the plant and function level
* Reduced dependency on centralized IT teams
* Increased ownership and adoption of digital solutions

But this is not about uncontrolled proliferation. Successful organizations balance empowerment with governance through:

* Standardized platforms (low-code/no-code, data, automation)
* Clear guardrails for security, data and architecture
* Digital champions embedded within business functions

The role of IT shifts from builder to platform provider, coach and orchestrator of innovation. When done right, citizen development creates a multiplier effect, turning every function into a contributor to digital transformation.

## Observability & AIOps: Managing complexity at scale

As digital ecosystems grow, so does complexity. Traditional monitoring approaches, reactive and fragmented, are no longer sufficient. The next frontier is AI-driven observability and AIOps, where:

* Logs, metrics and events are continuously analyzed
* Anomalies are detected proactively
* Automated remediation reduces downtime

This shift enables organizations to:

* Improve reliability and resilience
* Reduce operational cost
* Build internal intelligence rather than relying on external vendors

Observability becomes a core orchestration capability, enabling IT to manage increasingly complex digital environments with confidence.
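As a toy illustration of the continuous analysis described above (a sketch of the statistical idea, not a production AIOps stack), a rolling baseline with a z-score threshold is often the first building block for proactive anomaly detection:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Flags metric samples that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples: deque[float] = deque(maxlen=window)
        self.threshold = threshold  # how many standard deviations count as anomalous

    def observe(self, value: float) -> bool:
        if len(self.samples) >= 10:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                self.samples.append(value)
                return True  # anomalous: trigger an alert or automated remediation
        self.samples.append(value)
        return False

detector = AnomalyDetector()
latencies = [120, 118, 125, 122, 119, 121, 117, 123, 120, 118, 450]  # ms
flags = [detector.observe(x) for x in latencies]
print(flags[-1])  # True: the 450 ms spike stands out against the baseline
```

Real platforms layer seasonality models, correlation across signals and remediation runbooks on top, but the principle is the same: learn normal behavior, then act on deviations before users notice.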
## Talent, culture and leadership: The real differentiators

Technology alone does not transform organizations; people, culture and leadership do. Key shifts include:

* Skills: Building capabilities in data, AI and automation across the organization
* Culture: Driving speed, experimentation and continuous learning
* Leadership: Ensuring strong sponsorship and business-led digital adoption

The most successful organizations empower business teams to identify opportunities, while IT provides the platforms and governance to scale them.

## Governance: From control to value realization

Modern governance is no longer about approvals — it is about outcomes. Effective models focus on:

* Alignment to business priorities
* Transparent portfolio management
* Continuous tracking of value (EBIT, cost, productivity)

The key question shifts from _“Is this project on track?”_ to _“Is this delivering measurable business value?”_

## Conclusion: The CIO as orchestrator-in-chief

The CIO role has fundamentally evolved — from operator to orchestrator. Today’s CIO must:

* Align technology to business outcomes
* Integrate data, platforms and processes
* Enable innovation at scale across the enterprise

The organizations that will lead are not those that adopt the most technology, but those that orchestrate technology, data, AI and people into measurable outcomes. In a world of constrained budgets and rising expectations, the mandate is clear: Run with discipline. Orchestrate with intent. Transform with measurable impact.

**This article is published as part of the Foundry Expert Contributor Network.**
Joaquim Aguilar is the new CIO of Rajapack for Spain and Portugal

Joaquim Aguilar has been appointed IT director for Spain and Portugal at the Spanish subsidiary of Rajapack, a specialist in multichannel distribution of packaging, supplies and equipment. Aguilar brings a solid track record in global management roles, specializing in overseeing technology projects and strategies. As the person responsible for ensuring that technology is aligned with Rajapack’s business objectives, he will lead digital transformation and the adoption of new technologies in Spain with the aim of optimizing operational processes; he will drive the strategic use of data and strengthen the efficiency and competitiveness of the business; and he will be responsible for guaranteeing cybersecurity and the resilience of the technology infrastructure, as well as for promoting the adoption of innovative solutions. As he explained in a statement issued by Rajapack: “My goal at the company is to accelerate its digital transformation by driving the strategic use of data and automation to improve our processes, strengthen cybersecurity and bring even more value to our customers.”
Snowflake offers help to users and builders of AI agents

Snowflake is enhancing Snowflake Intelligence and Cortex Code to create a unified experience connecting enterprise systems, data sources, and AI models with Snowflake data. It’s part of the company’s vision to become the control plane for the agentic enterprise, enabling enterprises to align data, tools, and workflows with AI agents built on its platform. With these updates, the company said, Snowflake Intelligence becomes an adaptable personal work agent for business users, and Cortex Code expands as a builder layer for enterprise AI that provides governed, data-native development.

Enhancements to Snowflake Intelligence include automation of routine tasks by describing them in natural language, new Model Context Protocol (MCP) connectors, and reusable artifacts that let users save and share analyses, visualizations, and workflows, all of which will be generally available “soon.” In addition, a new iOS mobile app and multi-step reasoning with deep research, which uses agentic architecture to reason across data, will soon be in public preview. The company said that all of these updates came out of customer feedback, as well as from insights gleaned from Project SnowWork, last month’s preview of an autonomous AI layer for its data cloud.

Cortex Code now supports additional external data sources, including AWS Glue, Databricks, and Postgres, connectivity with other AI agents via MCP and Agent Communication Protocol (ACP), a Claude Code plugin, and a new agent software development kit with support for Python and TypeScript. There are also enhancements to Cortex Code in Snowsight, Snowflake’s web interface, including Plan Mode to allow developers to preview and approve workflows, and Snap & Ask to enable interaction with data artifacts such as charts and tables. Snowflake also announced the private preview of Cortex Code Sandboxes in Snowsight, a dedicated cloud environment where developers can execute code end-to-end with no setup.
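For readers unfamiliar with MCP, the protocol’s open-source Python SDK (`pip install mcp`) shows what “connecting agents to tools” looks like in practice. The sketch below is generic and illustrative, not Snowflake’s connector; the server name, tool and data are invented:

```python
# pip install mcp  (the open Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("warehouse-metrics")  # hypothetical server name

@mcp.tool()
def daily_order_count(region: str) -> int:
    """Return yesterday's order count for a region (stubbed here)."""
    # In a real connector this would query a governed data source;
    # the hard-coded values stand in for that call.
    return {"emea": 1_240, "amer": 2_310}.get(region.lower(), 0)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so any MCP-capable agent can call it
```

Any MCP-capable agent can discover and invoke a tool served this way, which is the interoperability Snowflake is leaning on with its MCP and ACP support.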
Michael Leone, VP & principal analyst at Moor Insights & Strategy, thinks the roadmap is “ambitious,” noting the number of items announced that are “coming soon” or are in public preview. “These announcements are starting to blur together, with almost every vendor claiming their agents can reason, act, and transform the business,” he said, adding, “What makes this one worth slowing down on, at least for me, is that Snowflake is going after both halves of the enterprise at the same time. Intelligence is built for the business users who want answers and actions without writing SQL, and Cortex Code is built for the builders who actually have to put this into production.” Most vendors pick one target, users or builders, and come back to the other later, he said, but Snowflake is putting both on the same governed data foundation. “[This] is a harder engineering problem, but I’d argue it’s a cleaner answer to the question enterprises are actually asking, which is how to open AI up to more people without losing control of the data underneath,” he said, noting that Snowflake has changed its approach from “let’s do it inside Snowflake,” to realizing that agentic AI only works if it’s interoperable with the rest of the stack.

Igor Ikonnikov, advisory fellow at Info-Tech Research Group, also sees the control plane play as part of an industry trend. “As always, the devil is in the details: what those platforms are composed of and how they offer to control AI agents,” he said. “Most platforms are built the old-fashioned way: All the controls are coded. Snowflake speaks about reusable analytics through saving the whole solution and reusing complete modules or models. It means that common semantics are still buried inside database models and code.” All AI vendors are motivated by the same demand from the market, he said: “Move from Copilot-based generic chatbots to business-purpose-specific AI agents that understand business logic and can interact with one another.” With these updates, he sees Snowflake as having caught up with the competition, but not yet surpassing it.

Sanjeev Mohan, principal at SanjMo, said, “The good news for customers is the support for Databricks and AWS Glue. What Snowflake is saying is that even if your data lives in a competitor’s system, Snowflake AI coding agent can be used. And vice versa, the VS Code extension and Claude Code plugin can be used on Snowflake data. In other words, it reduces vendor lock-in fears.”

It’s also the right strategic direction, said Sanchit Vir Gogia, chief analyst at Greyhound Research. “Enterprise AI is moving from generation to orchestration to execution, and Snowflake’s focus on governed data as the foundation for action aligns with that shift,” he said. “However, becoming the execution layer for enterprise AI requires more than integrating agents and expanding tooling,” he said. It also requires consistent semantics, reliable cross-system execution, strong governance, economic viability, and organisational readiness, as well as overcoming a structural constraint. “Control without ownership of the systems where work is executed introduces dependency that is difficult to fully resolve. This is the central tension in Snowflake’s strategy and will define how far it can realistically extend its influence,” he said. “Snowflake has taken a meaningful step in that direction. It has not yet proven that it can deliver this at scale. At this stage, it is one of the most credible contenders in a race that will be defined not by who builds the smartest AI, but by who can make that AI work reliably inside the enterprise.”

_This article first appeared on InfoWorld._
Does IT have a value problem?

CIOs are challenged to communicate IT’s business value when the benefits of IT initiatives are realized in business-unit financials and workflow efficiencies. But a deeper question every CIO should ask is whether their IT department actually does have a value problem, where they might be getting things done but making little impact on business outcomes. Here, honest evaluation is key. Research consistently shows a divide between how well IT is perceived to be functioning and the value business executives believe it delivers. Examples of IT’s sagging reputation include low executive perception of IT services and underperforming digital investments. IT operational improvements, security enhancements, and other risk-reduction programs are unfortunately recognized as core IT functions rather than strategic value drivers. Focusing IT’s value narrative on operational functions can put IT leadership positions at risk when the CFO seeks cost reductions from AI or other technology benefits. IT must deliver value through leadership that drives change, growth outcomes from AI and data initiatives, and improved experiences.

“When leadership oversimplifies IT priorities like AI by reducing value to cost savings alone, it creates a flawed framework for evaluating innovation,” says Ha Hoang, CIO at Commvault. “The real opportunity isn’t just expense reduction; it’s capability expansion. And meaningful indicators of progress include improvements in operational efficiency, speed of decision-making, and customer or employee experience.”

## Value problems start with leadership

The 2025 State of the CIO report highlights a part of the issue around IT’s potential value problem. Even though 82% of CIOs say their roles are becoming more digital- and innovation-focused, only 50% see themselves as business leaders. Is it a lack of confidence in collaborating with executives, gaps in understanding business operations, or a lingering cultural divide between business and IT? Whatever the cause, CIOs who show up as tech leaders first have a harder time tying investments to outcomes and communicating how IT initiatives deliver business value. There’s an urgency here, as the Digital Leadership Report finds that digital leaders expect to stay with their employer for 3.3 years — relatively little time to demonstrate impact.

But it’s not just a leadership gap when business executives are underwhelmed by IT’s impact. The 2026 Technology Investment Management Report from Apptio highlights confidence gaps in technology investment decisions as well. The largest such gaps include 90% being unsure of an investment’s value or ROI, 84% distrusting the data, and 82% reporting misalignment with organizational objectives.

**Recommendation:** First, IT leaders must work to communicate IT’s value in business terms, aligned with strategic drivers, not in terms of what IT is doing technically or how it’s improving operations. Second, CIOs must capture the financial impacts from trusted data and AI, not just the qualitative benefits of improving data quality or instituting AI governance. Lastly, IT leaders should develop roadmaps that deliver value with every release, rather than communicating short- versus longer-term benefits.

## AI amplifies the value gap

IT’s value gap may have gotten worse over the past year, as CIOs have struggled to deploy AI experiments into production and to deliver ROI from AI initiatives.
“IT never had a value problem, but it’s had a value articulation problem,” says Vikram Bhandari, chief technology and innovation officer at Riveron. “When AI ROI is framed purely as headcount reduction, IT gets boxed into a cost-center narrative. The real opportunity is using AI to scale revenue, reporting, and decision-making without linear cost growth. That’s how IT moves from cost center to strategic driver.”

It’s challenging to forecast and measure non-cost returns from AI investments, such as increased revenue and market share. Additionally, many AI initiatives start as experimental POCs, and organizational learning is required to identify and pursue optimal value drivers. “Measuring ROI on AI investments is critical, even when the return isn’t fully known upfront,” says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. “What matters most is creating the space to test, learn, and pressure test assumptions so leaders can see where AI truly moves the needle. The key is aligning those early insights with the broader enterprise strategy so teams can scale what works and sunset what doesn’t. Over time, the real impact comes when those capabilities allow the organization to operate differently and unlock new growth.”

**Recommendation:** Focusing on AI’s productivity and workflow efficiencies can trap CIOs into cost benefits largely realized through headcount reductions. AI is only beginning to reshape businesses, and AI agents are not yet driving digital transformation. CIOs focusing on use cases that drive revenue, deliver new products, or transform customer experiences are steps ahead of those who use AI only to optimize operations.

## Deliver business value through trusted data

CIOs will have to partner with CFOs to address any perceptions that IT is underdelivering on financial expectations. But before approaching the CFO, CIOs should first partner with the CMO on AI growth initiatives. While 93% of marketers have a dedicated gen AI budget, only 8% are very confident in their organization’s AI governance, according to the report Marketers and AI: Navigating New Depths. Therein lies a challenge CIOs can address through an AI governance program that balances guardrails with strategy. Focusing only on risk mitigation is one way CIOs paint themselves back into a compliance narrative rather than being a partner in growth. CIOs need to do both, and one important way to accomplish this is to enable citizen analytics and develop trusted data products. By developing data products, CIOs can streamline much of the upfront data pipelines, governance, and management needed to deliver trusted data assets that people, tools, and AI can then use for different purposes.

“Creating a data product starts with knowing when it’s justified,” says Jed Dougherty, SVP of AI and platform at Dataiku. “Look for repeatable business decisions supported by reliable, well-understood data and infrastructure capable of meeting quality and availability expectations. Measure value by linking the product to business outcomes and adoption, tracking how widely it’s used and whether it improves the decisions or processes it supports.”

**Recommendation:** Product-based IT organizations developing data products aligned to AI strategies are seen as delivering business value to internal customers, with defined roadmaps and customer support.
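What might that measurement look like in practice? One hedged sketch: a scorecard that blends adoption and reliability signals for a data product into a single trackable number. The fields, targets and weights below are invented for illustration, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class DataProductKpis:
    distinct_consumers: int   # teams or agents querying the product
    queries_per_week: int
    decisions_supported: int  # decisions citing the product (survey/log based)
    freshness_sla_met: float  # share of loads meeting the SLA, 0..1

def adoption_score(k: DataProductKpis) -> float:
    """Blend adoption and reliability into one number (weights are arbitrary)."""
    return round(
        0.4 * min(k.distinct_consumers / 20, 1.0)
        + 0.3 * min(k.queries_per_week / 500, 1.0)
        + 0.2 * min(k.decisions_supported / 50, 1.0)
        + 0.1 * k.freshness_sla_met,
        2,
    )

print(adoption_score(DataProductKpis(12, 420, 35, 0.98)))  # 0.73
```

However the weights are chosen, tracking such a score per data product over time gives the roadmap conversation an evidence base rather than an anecdote base.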
## Value through delightful experiences

Want a direct measure of IT’s value? Capture employee satisfaction (ESat) on the IT service desk, customer satisfaction (CSat) on digital tools provided to customers, and stakeholder satisfaction when delivering workflow-improving AI agents. If satisfaction and usage aren’t improving, then chances are end-users are using alternatives. Inside the enterprise, that likely results in shadow IT, an opportunity for CIOs to step in and turn around capability or usability gaps.

“As CIOs, when ROI is not fully clear upfront, we focus first on the real problems teams are experiencing,” says Tomás Dostal Freire, CIO and head of business transformation at Miro. “If people create work-arounds or use unofficial tools, it is usually a strong signal that something in the workflow isn’t working. Our responsibility is to formalize what already proves effective and then measure improvements in speed, quality, or delivery.”

Delivering a delightful user experience and improving user satisfaction metrics don’t happen by having engineers glued to screens and focused only on implementation. As IT departments leverage AI’s coding capabilities or adopt vibe coding, there’s an opportunity to encourage more engineers to observe how people get work done and develop their business acumen.

**Recommendation:** Many SaaS platforms are overhauling their user experiences to showcase agentic AI capabilities. Workflow integrations and the use of MCP servers to connect AI agents may lead to SaaS platform evolution and consolidation. CIOs looking to demonstrate IT’s value will develop change management programs to help employees build AI literacy and transition to agentic experiences.

## Recommendations for CIOs

CIOs of world-class IT organizations recognize that developing meaningful business relationships, enabling employees to experiment with AI, and promoting lifelong learning are three key building blocks to developing an IT culture focused on delivering business value. IT’s portfolio of initiatives must include roadmaps tied to growth, and CIOs must lead communications about the business value delivered.
Why the CIO is uniquely positioned to lead the digital workforce

In March, OpenAI’s GPT-5.4 set a new high-water mark on GDPval, exceeding industry professionals in 83.0% of comparisons across 44 occupations. In April, Anthropic’s Claude Opus 4.7 signaled a similar advance: a 13% lift on a 93-task coding benchmark and a 14% gain on complex workflows. Anthropic’s limited-release Claude “Mythos Preview” scores an additional 13% on top of Opus 4.7 on SWE-bench Pro, the most challenging coding benchmark. This progress has changed the enterprise conversation because AI is now doing much more than generating answers. It’s at a point where it can complete meaningful pieces of work.

In sharp contrast, the previous two years were defined by hesitation across the C-Suite. In early 2024, IBM found that 42% of enterprise-scale companies had actively deployed AI, while another 40% were still exploring or experimenting. In 2025, IBM reported that only 25% of AI initiatives had delivered expected ROI and only 16% had scaled enterprise-wide. McKinsey’s 2025 survey showed the same pattern at a broader level: nearly nine in ten organizations were using AI in at least one function, yet most remained in piloting or early scaling, and only 39% reported any business impact. The mood across 2024 and 2025 was distinctly shaped by curiosity and pilot activity with persistent questions about payoff.

## From pilots to production

Against that backdrop, 2026 has ushered in a new era of workflow accuracy with strong gains in places where enterprises can see immediate value: spreadsheets, document-heavy analysis and software development. For example, OpenAI’s GPT-5.4, released in March 2026, is 33% less likely to produce false output than GPT-5.2 on a set of real-world prompts. Anthropic positioned Opus 4.7 as a model that plans more carefully and sustains longer-running work. At the same time, Deloitte reported that workforce access to sanctioned AI tools had risen from fewer than 40% to around 60% in one year, and that 85% of companies expect to customize autonomous agents for their own businesses.

Markets have picked up the implications quickly. By the first week of February, stocks had lost about $1 trillion in market value as investors worried that fast-advancing AI tools could upend the sector. Part of the shock came from how directly the new systems were moving into core business workflows. Reuters reported that Anthropic launched plug-ins for legal, sales, marketing and data-analysis tasks, then added more plug-ins for investment banking, wealth management, HR, private equity, engineering and design. OpenAI, meanwhile, formed an alliance with BCG, McKinsey, Accenture and Capgemini to give AI pilots greater legitimacy and a clearer path to scale through consulting firms that enterprises already trust to guide major transformation efforts. Investors were reacting to a simple idea: software was beginning to perform work that had once lived inside teams and SaaS products.

## The CIO becomes steward of digital labor

As AI takes on more structured cognitive work, enterprises gain a new layer of digital labor. Someone must decide where that labor fits, how it connects to core systems and data, how its output is measured, where human oversight remains essential and how risk and accountability are managed.
Those responsibilities sit naturally with the CIO because they span the very domains the role already oversees: enterprise platforms, security, governance, integration, operating workflows and the architecture that links technology to business execution. The CIO is also one of the few leaders with visibility across functions, which makes the role especially well-suited to determining where digital labor can scale, where it needs guardrails and how it should reshape the way work gets done. The mandate now extends beyond running systems. It includes stewarding systems that increasingly execute work.

This pushes the CIO deeper into business strategy. Now that AI is accurate enough to redesign workflows, the challenge has become operational, economic and organizational. Which tasks should move to agents first? Where does human judgment create the most value? Which functions benefit most from faster analysis and machine-assisted execution? The answers shape speed, margins, customer experience and competitive differentiation. In this environment, the CIO becomes one of the executives most responsible for translating technical progress into business-model advantage.

The next source of advantage will come from converting company-specific judgment into executable systems. Frontier models are spreading quickly across the market, which brings up a different question: whose policies, pricing logic, approval paths, customer context and exception rules are being encoded into workflows that agents can execute with confidence? Much of a company’s edge lives inside those decisions. The CIO stands at the center of that conversion because turning institutional know-how into reliable machine action requires data access, process redesign, system integration and governance working together.

As AI access broadens and use becomes routine, the CIO’s role increasingly includes leading cultural change. Teams need training, new operating norms, trusted guardrails and clear accountability for outputs shaped by AI. Roles are beginning to shift toward judgment, exception handling, taste and decision-making. The most effective CIOs will treat this as work redesign rather than a tool rollout. They will build a blended workforce in which people and digital workers are orchestrated together with intention.

## Turning AI capability into operating advantage

AI’s promise is growing faster than most enterprises’ ability to capture its value. Yet only 12% of CEOs report higher revenues from AI. Given the CIO’s role as an execution leader, the gap between what the technology can do and what the business realizes is exactly where CIO leadership matters most. The enterprise needs someone who can turn AI from enthusiasm into operating discipline by selecting workflows with measurable upside, embedding governance into deployment, managing vendors and models coherently and proving that digital labor can scale safely inside the business. This is where CIOs can truly shine.

The organizations that win this phase will treat AI as a managed workforce layer with standards, accountability and clear ownership. The next management discipline will look like workforce management fused with managerial accounting. Leading CIOs will track digital labor through business metrics: cost per accepted outcome, cycle-time, error and rework levels, escalation patterns and the share of output that still requires human repair.
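As a hedged illustration of what such a scorecard could look like (the record layout and numbers are invented for the example, not a standard):

```python
from dataclasses import dataclass

@dataclass
class AgentTask:
    cost: float      # model + infra cost for the attempt, in dollars
    accepted: bool   # output used as-is
    reworked: bool   # output needed human repair
    escalated: bool  # handed back to a person entirely

def digital_labor_report(tasks: list[AgentTask]) -> dict[str, float]:
    """Roll a task log up into the business metrics named above."""
    accepted = [t for t in tasks if t.accepted]
    total_cost = sum(t.cost for t in tasks)
    return {
        "cost_per_accepted_outcome": round(total_cost / max(len(accepted), 1), 2),
        "rework_share": round(sum(t.reworked for t in tasks) / len(tasks), 2),
        "escalation_share": round(sum(t.escalated for t in tasks) / len(tasks), 2),
    }

tasks = ([AgentTask(0.40, True, False, False)] * 80    # clean completions
         + [AgentTask(0.40, True, True, False)] * 12   # accepted after repair
         + [AgentTask(0.40, False, False, True)] * 8)  # escalated to humans
print(digital_labor_report(tasks))
# {'cost_per_accepted_outcome': 0.43, 'rework_share': 0.12, 'escalation_share': 0.08}
```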
Measures like these show where AI is compounding value, where it is creating hidden friction, and where human oversight continues to carry the greatest economic return. The enterprises that build this measurement layer early will scale AI with evidence, steer investment with far more precision and learn faster than competitors how to allocate work across people and machines.

AI is making the next chapter of IT leadership bigger than infrastructure and more consequential than another round of digital transformation rhetoric. As software begins to perform meaningful work, the CIO becomes the steward of the digital workforce. The role now extends into strategy, growth, talent, culture and operating model design. In 2024 and 2025, enterprises were asking whether AI would ever justify itself. In 2026, the more urgent question is where AI can reshape workflow economics first. CIOs will be the executives who answer it.

**This article is published as part of the Foundry Expert Contributor Network.**
Cybersecurity in the pharmaceutical sector: the experience of Faes Farma

Cybersecurity in the pharmaceutical sector is a matter of public health and operational continuity, not merely of data protection. In an environment of industrial digitalization, geopolitical pressure, post-quantum risk, disruptive artificial intelligence and growing regulation, the only viable response is a **comprehensive cyber-resilience strategy** based on prevention, detection, response and recovery, backed by senior management and embedded in the organizational culture. So explained **Jaime López Ostio**, global IT director of Faes Farma, at the CIO ForwardTech & ThreatScape Spain event held on April 16 in Madrid. More than preventing attacks, the goal in this sector is a different one: **ensuring that the production of medicines never stops.**

He explained that over recent years the attack surface has expanded exponentially and now reaches into the supply chain. Along these lines, he recounted how an incident at a paper mill can affect the commercialization of a medicine, since it cannot go to market without its package leaflet. He also spoke about the importance of building in cybersecurity from the design stage. “That is what we have done at a new plant in Vizcaya,” he said, pointing out the differences from older facilities whose technology architecture reflects paradigms from decades ago.

López described the specific characteristics of a sector that, while not facing entirely different threats, does present conditions that amplify their impact: the high value of the clinical and personal data handled, operational criticality and regulatory constraints. “What we do have are special attackers, such as the APT29 group, which specializes in attacking the pharmaceutical industry,” he said. He also recounted examples of targeted attacks: “They can change the production conditions of medicines, for example changing the required temperature from 12 to 18 degrees. That does not affect the end user, because medicines are tested before they are marketed, but it ruins an entire production run,” he said.

He went on to describe new risk vectors affecting the pharma sector. One is employees’ uncontrolled use of artificial intelligence tools, “shadow AI,” which means exposure of sensitive data, loss of control over strategic information and regulatory risk. Another is the growing interdependence with suppliers, which inevitably enlarges the attack surface; supplier assessment becomes key not only for industrial continuity but also for indirect cyber exposure.

## Post-quantum risk

Faes Farma’s global IT director did not spare his warnings about a danger of growing concern in the sector: post-quantum risk. “A quantum computer capable of breaking encrypted information is a real risk. Some say China already has one,” he warned. “It is information theft carried out with our encrypted information.
The ‘harvest now, decrypt later’ scenario seems to be fairly close,” he said.

López Ostio structured the cyber-resilience strategy around four fundamental pillars: proactive prevention and protection; continuous detection and analysis; effective response and containment; and recovery and operational continuity. On this last point he stressed the value of applying the 3-2-1 backup rule (illustrated in the sketch below) and running annual recovery drills.

Regarding regulation, he argued that it is an opportunity to justify investments, align business and technology, impose internal standards and generate a cascade effect on suppliers (NIS2). Regulatory pressure, he maintained, helps overcome organizational resistance.

Finally, he gathered the main cybersecurity lessons learned in recent times: the importance of continuous improvement, the critical role of the human factor, third-party management and incident preparedness. “You have to assume there will be incidents, rehearse scenarios and integrate cybersecurity into corporate strategy,” he concluded.
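The 3-2-1 rule he refers to is the common convention of keeping at least three copies of the data, on at least two different types of media, with at least one copy offsite. As a minimal illustration only (the inventory schema and names below are invented for the example, not Faes Farma's actual tooling), a check of that rule might look like this:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    dataset: str
    medium: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check the 3-2-1 rule: >=3 copies, >=2 media types, >=1 offsite copy."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Example: two local disk copies plus one offsite tape copy passes the check.
batch_records = [
    BackupCopy("batch-records", "disk", offsite=False),
    BackupCopy("batch-records", "disk", offsite=False),
    BackupCopy("batch-records", "tape", offsite=True),
]
assert satisfies_3_2_1(batch_records)
```

The annual drills he recommends then amount to actually restoring from one of those copies and timing how long recovery really takes.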
The gap between SAP and its customers must not widen further

SAP has taken a beating of late in the stock market due to perceptions that the company’s enterprise software offerings and foothold are vulnerable to the rise of AI. Now, SAP customers are voicing their concerns — less about the replaceability of SAP platforms at the hands of AI than in terms of the AI outcomes and clarity they are getting from SAP’s platforms and vision.

At last month’s German-speaking SAP User Group (DSAG) conference, the overall sentiment was clear: There is still a long way to go between SAP’s ambitious AI plans and the reality its customers face. Stefan Nogly, DSAG’s technology expert, warned in an interview with Computerwoche against further divergence — but also said he sees some progress.

“We need to be careful that the gap between SAP and its users doesn’t widen further,” he says — a concern recognized by SAP itself, as SAP CTO Philipp Herzog admitted in his keynote address at the event that a significant gap exists between AI innovation and actual outcomes. “SAP intends to actively improve in this area. I am generally satisfied with the answers and the announced measures,” Nogly adds.

## The top tier: AI for IT

Nogly understands why many companies remain hesitant regarding AI and SAP, as the DSAG Investment Report 2026 recently revealed. Integration of AI agents into business processes is essentially the “final stage” — and in many cases, trust, experience, and, above all, a suitable data foundation are still lacking. “We are in a phase where we have a lot to learn and try out,” said the DSAG spokesperson.

From Nogly’s perspective, it makes sense to promote AI experimentation first within IT. SAP has already announced its intention to provide greater support in this area — for example, through migration tools and additional AI functions within IT and transformation processes. “Often, the initial focus is on coding support, such as through ‘Joule for Developers.’ However, there are actually many more areas of application,” Nogly explains.

Especially in the context of cloud transformations, AI can significantly contribute to efficiency, for example, in adapting interfaces. Many companies have not just a few, but hundreds or even thousands of interfaces — from business-to-business to application-to-application — that need to be adapted and optimized. Here, AI can significantly increase speed and productivity. The same applies to user interface development, says Nogly. If developers can create multiple UI variants more quickly and coordinate them with the relevant departments, the benefits are immediately apparent. Overall, there is a wide range of potential applications for AI where the added value often becomes apparent more quickly than with direct integration into business processes.

## In search of added AI value

Many companies are not yet ready for such integration, however, the DSAG representative adds. The industry is currently in a learning phase, he says, with the focus primarily on gaining experience and understanding where AI actually delivers added value. To that end, Nogly recommends testing AI in a controlled manner, with clearly defined areas of application, developed step by step. More complex use cases, such as those SAP is currently strongly promoting, are not yet within reach of many companies — for example, when it comes to public cloud scenarios or the use of data products in the Business Data Cloud, Nogly adds.
This level of maturity takes time to build — nevertheless, customer companies expect SAP to demonstrate a clear and practical path to get there. For that, companies primarily need planning certainty, he says — and time to uplevel operations. “This takes a bit of time, and we need to allow ourselves that time. We should consciously say: We’re trying things out and learning,” a process that also includes fundamental strategic realignment within customer companies themselves.

Some pioneers closely aligned with SAP’s strategy, such as Frosta and Hörmann, have demonstrated that SAP’s approach works in principle — however, such flagship projects, highlighted in the event’s keynote, have been rather isolated. Many midsize companies, in particular, are still acting cautiously, observing costs, benefits, and risks, and waiting to see how things develop.

## A new dimension of security

A key issue in this context is security. Nogly emphasizes that, for example, critical infrastructure companies in the energy, transport, and healthcare sectors already have to comply with very strict requirements under the IT Security Act (IT-SiG 2.0) — regardless of AI — while for the wider economy, the requirements of the NIS2 Directive and the BSI IT Baseline Protection serve as the benchmark. “This must become the standard practice for any company,” he says.

However, AI adds an additional dimension. “Many people find it more enjoyable to talk about productivity and simplification,” says Nogly. “But we also need to know precisely what data AI accesses, whether it modifies data, and how decisions are made.” Trust in AI systems can only be built through security. Therefore, a new discipline is emerging within IT security.

Nogly warns against focusing investments solely on efficiency gains: Companies must also invest in understanding the technology and its risks. “Those who only look at productivity and neglect security are missing the mark,” he says. The situation is becoming increasingly complex, especially with regard to AI agents taking on increasingly autonomous tasks and linking processes together. This development marks a new level of complexity — and significantly increases the demands on governance, control, and security mechanisms.

## AI needs data — and patience

A key obstacle for companies remains the data foundation. For many SAP customers, analytics landscapes are fragmented, and a unified data layer is lacking. “This is the reality for the majority of companies,” says Nogly. With its Business Data Cloud (BDC), SAP has chosen a sound strategic approach, with concepts like data products and a semantic layer fundamentally suitable for bringing order and transparency to data landscapes.

But the solution is coming late: Numerous organizations have already invested in platforms such as Snowflake or Databricks to address precisely these problems. Accordingly, the question now arises as to how existing solutions can be meaningfully combined with the BDC — without adding complexity or high costs. “This needs to be explained,” says Nogly. Introducing yet another tool is neither trivial nor inexpensive. Furthermore, he sees room for improvement in SAP’s implementation: The product needs to mature further, become more understandable, and be more accessible. Besides technological hurdles, the commercial model also plays a role. “The idea is good — but it still needs to be proven,” he summarizes.
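The concepts Nogly points to, data products and a semantic layer, amount to treating data as a published contract: owned, described and versioned. Purely as a sketch of the idea (the fields below are illustrative, not SAP's actual BDC schema):

```python
from dataclasses import dataclass

@dataclass
class DataProduct:
    """Illustrative data-product contract: owned, described, versioned data."""
    name: str
    owner: str                      # accountable business domain
    schema: dict[str, str]          # column -> technical type
    semantics: dict[str, str]       # column -> business meaning
    freshness_sla_hours: int
    version: str = "1.0"

revenue = DataProduct(
    name="order_revenue",
    owner="sales-domain",
    schema={"order_id": "string", "net_amount": "decimal", "booked_at": "timestamp"},
    semantics={"net_amount": "Order value net of tax, in document currency"},
    freshness_sla_hours=24,
)

# A catalog is then just a searchable registry of such contracts.
catalog = {revenue.name: revenue}
```

The cataloging and migration paths DSAG asks for below are essentially about making such contracts discoverable and keeping them honest.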
Public cloud ERP systems haven’t yet reached sufficient maturity to be a viable alternative for the majority of customers, Nogly points out, though he no longer considers implementation of SAP BTP (Business Technology Platform) to be a major obstacle. The fact that many companies still rely on on-premises or private cloud models is primarily due to the realities of the transformation process: companies have to prioritize. Often, the ERP system migration comes first, followed by facets like analytics or data platforms. “It doesn’t all happen at once,” Nogly emphasizes. Limited resources, budgets, and organizational capacity mean that the transformation can stretch over years.

This context also clarifies why many AI initiatives are still in an early stage. Only when a solid data foundation exists can AI applications be used effectively and scaled. “We talk a lot about AI these days, but at the same time we’re still in an experimental phase,” says Nogly. The pace of new models and applications is rapid — but their actual implementation in companies is lagging behind.

## Pressure on IT is increasing

At the same time, the pressure on IT departments is increasing. Nogly reports that many CIOs are currently being confronted by their management with AI initiatives. “Everywhere, solutions are supposed to be tested quickly,” he says. This approach often contradicts necessary foundations such as data quality, security, and governance, creating a tension between the pressure to innovate and technological reality.

Regarding the question of standardization versus individualization, the DSAG representative also advocates for a clear course. The goal must be to create stable and maintainable systems. “We want to move away from a situation where a single patch terrifies an entire company,” he says. SAP has laid the right foundations with its Business Technology Platform. However, the platform and its associated extension concepts now need to be understood and consistently used in practice, says Nogly. “First, you have to fully explore its potential and learn how to use it.” This includes technologies such as Fiori and CDS Views, which will play a central role in the future. SAP has clearly confirmed this direction: “Philipp Herzog said on stage: Absolute investment protection in Fiori, in CDS Views, in this entire underlying framework. Yes, that is the future,” Nogly notes.

However, this also means a profound transformation for companies, he adds. Developers must move away from classic ABAP approaches and understand and apply the new platform landscape. Once this step is completed, further expansions will still be possible — but within clearly defined guidelines. The goal is a platform approach that ensures stability and security and eliminates the fear of updates, according to Nogly.

At the same time, Nogly points out that this change also takes time. SAP began its transformation 10 to 12 years ago, whereas many companies started much later — some seven or eight years ago, others only now. Consequently, their levels of maturity vary considerably. In many organizations, a fundamental rethinking of operations and development is only just beginning. The necessary technologies and expansion options are fundamentally available. But it remains to be seen whether they will be sufficient in every case. Furthermore, the possibility of integrating other solutions into modular IT landscapes still exists.
## DSAG demands more clarity, maturity and support

During its Technology Days event, DSAG also compiled a list of demands, primarily calling on SAP for more clarity, maturity, and support in implementing key future topics.

* For AI to truly become enterprise-ready for SAP customers, orchestrated agents, transparent decision logic, secure data, and open integration for third-party agents are needed. At the same time, DSAG expects a clearer strategic vision, simpler implementation, and investment protection for existing technologies such as Fiori.
* In terms of data, the focus is on expanding Business Data Cloud. It should serve as a unified, trustworthy data layer. This requires clearly defined data products, improved cataloging, and practical migration paths to modern data architectures.
* In security, DSAG seeks binding best practices, clear governance models, and, above all, transparency and traceability of decisions — both technical and regulatory.
* For transformations, the user group would like more concrete support: for example, through funding programs, more migration tools, more practical reference architectures, and closer coordination with SAP on roadmaps.
Beyond the ‘25 reasons projects fail’: Why algorithmic, continuous scenario planning addresses the root causes

A widely shared Template22 graphic on why projects fail prompted this article. I am using that chart as a prompt, not as evidence. The more useful question is not whether the familiar causes of failure are real. They are. The more useful question is why they keep repeating across programs, portfolios and enterprise transformations, even after years of investment in methods, PMOs, digital tools and AI. The answer, in many cases, is not a lack of effort. It is a lack of decision logic. Enterprises still launch, govern and defend large initiatives without a planning discipline capable of calculating trade-offs, exposing constraints, modeling dependencies and recalculating the impact of change quickly enough to support real governance.

## The pattern under the pattern

Most discussions of project failure start with visible symptoms: unclear scope, weak requirements, scope creep, poor communication, resource shortages, unrealistic deadlines, weak sponsorship and poor change control. Those symptoms matter, but when they recur at scale, they usually point to a deeper problem in the planning system itself.

In PMI’s 2025 research on the strategy execution gap, PMI President and CEO Pierre Le Manh argued that AI will create value only when organizations can translate bold ideas into executed initiatives. In most enterprises, the gap is not ambition. The gap is conversion. Strategy is declared, portfolios are funded, work begins, yet leaders still cannot calculate trade-offs, expose constraints, model dependencies or replan fast enough when conditions change.

The scale of the issue is hard to dismiss. BCG’s 2024 study of large-scale technology programs found that more than two-thirds are not expected to be delivered on time, within budget and within scope, and that only 30% fully meet expectations on those three dimensions. Gartner’s 2024 survey found that only 48% of digital initiatives across the enterprise meet or exceed their business outcome targets. Those are not isolated execution misses. They are signs of systemic underperformance in how organizations prioritize, fund, sequence and govern change.

Other firms sharpen the diagnosis from different directions. McKinsey’s work on successful transformations found that among companies whose transformations failed to engage line managers and frontline employees, only 3% reported success. Bain’s David Michels argues that “red is good,” meaning organizations perform better when risk is surfaced early rather than hidden behind reassuring dashboards. Deloitte’s research on digital acceleration and strategy makes the strategic requirement explicit: Digital possibilities must shape strategy, and strategy must shape digital priorities.

Put together, those findings point to one conclusion. Large programs rarely fail because a single team misses a task. They fail because the enterprise cannot see the interaction of priorities, constraints, dependencies and consequences early enough to respond intelligently.

## Why this is a planning problem, not just a delivery problem

At the portfolio level, failure begins when organizations select too much work, fund the wrong work or fund the right work without a realistic view of capacity, technical debt and delivery interdependencies. BCG ties poor outcomes directly to inaccurate timeline and resource planning, weak end-to-end roadmaps and ineffective management of interdependencies.
That is not simply a delivery problem. It is a portfolio design problem. Forrester’s 2025 work on operating model change adds a related warning: Fewer than half of IT leaders say their organizations prioritize operating model adaptation, leaving strategy to collide with structures that are not built to absorb change.

At the governance level, failure shows up as a value problem. Traditional oversight mechanisms can collect status, enforce templates and schedule reviews, yet still fail to answer the executive question that matters most: What happens if a key dependency slips, a budget is reduced or a shared team becomes overcommitted? Bain’s “red is good” matters here because watermelon reporting, green on the outside and red underneath, is usually a sign that governance is reporting milestones instead of modeling consequences. Gartner’s survey of Digital Vanguard organizations reinforces the point. The highest performing digital organizations do better when business and technology leaders are more aligned on execution and outcome ownership.

At the execution level, the familiar problems remain, but they look different when viewed through a planning lens. PMI’s communications research found that one out of five projects is unsuccessful due to ineffective communication, and PMI’s later analysis of communication failures linked poor communication to more than half of the projects that fail to meet business goals. The important nuance is that communication is not merely a soft skill problem. It is often a failure to express the implications of planning decisions in a form that the business can act on. An unclear scope can be a weak scenario definition. Poor requirements can reflect commitments made before constraints were visible. Scope creep is often an unmanaged consequence. Weak sponsorship often reflects weak evidence. Poor change control often means the organization can log a change but cannot calculate its ripple effects.

## Why algorithmic planning is now a governance requirement

This is where the conversation needs to become more precise. Continuous scenario planning is valuable, but it only becomes decision-grade when it is supported by algorithmic planning. In large programs and portfolios, governance cannot rely on static reporting, intuition or periodic review alone. It must be able to calculate the impact of change quickly, expose hard constraints clearly and place dependencies, capacity limits, sequencing conflicts and trade-off consequences where they belong, at the center of decision-making. Without that discipline, governance is mostly a matter of interpretation. With it, governance becomes evidence-based control. That conclusion follows directly from the documented failure patterns of PMI, BCG, McKinsey, Bain, Deloitte and Gartner.

AI makes this requirement even more important. Used well, AI can be a powerful interface for senior leaders, helping them interrogate scenarios, surface anomalies, summarize risks and engage more directly with the planning environment. Used badly, it can do the opposite. If AI is not tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations, it can turn supposition into false confidence. That is dangerous in portfolio and program governance, where plausible-sounding answers are not the same as decision-grade answers. The sequence matters. First, the organization needs a locked-down, calculation-based planning model with clear borders.
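To make "calculation-based" concrete, consider a deliberately tiny sketch (all task names and durations are invented): a dependency-aware model that recomputes a program's finish date when one upstream task slips, the kind of ripple-effect question governance needs answered in seconds rather than at the next steering committee.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration_weeks: int
    depends_on: list[str]

def finish_week(tasks: dict[str, Task], name: str, cache=None) -> int:
    """Earliest finish week, assuming a task starts once all its dependencies finish."""
    cache = {} if cache is None else cache
    if name not in cache:
        task = tasks[name]
        start = max((finish_week(tasks, dep, cache) for dep in task.depends_on), default=0)
        cache[name] = start + task.duration_weeks
    return cache[name]

plan = {
    "data-migration": Task("data-migration", 6, []),
    "integration": Task("integration", 4, ["data-migration"]),
    "go-live": Task("go-live", 2, ["integration"]),
}
baseline = finish_week(plan, "go-live")      # 6 + 4 + 2 = 12 weeks

plan["data-migration"].duration_weeks += 3   # scenario: migration slips by 3 weeks
assert finish_week(plan, "go-live") == baseline + 3
```

A real portfolio model adds capacity limits, budgets and shared teams, but the principle is the same: consequences are computed from explicit structure, not asserted in a status report.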
Then AI can sit on top of that model as an accelerator, interpreter and executive interface. Without those boundaries, AI can easily magnify weak assumptions rather than expose them. This caution is consistent with PMI’s strategy execution framing and with EY’s 2026 CEO Outlook and Accenture’s AI reinvention thesis, both of which insist that AI must be scaled with discipline and strong foundations.

Strategic intent is inherently directional. Governance must be exacting. The bridge between the two is algorithmic planning. It is the mechanism that translates ambition into modeled consequences by testing scenarios, exposing constraints, mapping dependencies and recalculating trade-offs as conditions change. Without that bridge, governance becomes subjective. With it, leadership can distinguish between what is desirable, what is feasible and what is now at risk. That is why constraints, dependencies and capacity should not be treated as soft considerations. They are the black-and-white rules of execution. AI is most valuable when it explains a sound planning model, not when it improvises one.

## Why continuous scenario planning matters

Continuous scenario planning becomes strategically important when it gives leaders a way to compare options side by side, test trade-offs before they commit, expose bottlenecks early, map dependency cascades and continuously recalculate what changes when budgets, priorities or constraints shift. That directly addresses many of the structural drivers identified above. It does not solve every reason projects fail. It does attack a large share of the root causes beneath them.

Seen this way, many of the familiar 25 reasons collapse into a smaller set of systemic failures. An unclear scope often results in a weak scenario definition. Poor requirements are often commitments made before constraints and dependencies were visible. Scope creep is often an unmanaged consequence. Poor communication often reflects fragmented planning logic, with business, finance and delivery working from different maps. Resource shortages are often hidden by overcommitment. Weak sponsorship often reflects weak evidence. Poor change control usually means the organization can record changes but cannot model impact.

At the project level, teams can sometimes survive these problems through heroic effort. At the portfolio level, heroics stop working. Constraints win. Bottlenecks win. The question is whether leadership can see them early enough to respond intelligently.

PMI’s newer M.O.R.E. framework supports this shift. PMI argues that project outcomes improve materially when organizations manage perceptions, own success, relentlessly reassess and expand perspective. Two of those ideas matter especially here. Relentlessly reassess describes a discipline of continuous adjustment as conditions shift. Managing perceptions requires communicating value and risk in ways stakeholders can act on. That is remarkably close to what mature continuous scenario planning should do at scale.

## Why the urgency is rising

The pressure on CIOs is increasing, not falling. EY’s 2026 CEO Outlook says leaders are pursuing growth and adaptability through bold AI transformation, with 2026 becoming a turning point as organizations move from pilots to scaled enterprise use. Accenture makes a similar point from a different angle, arguing that organizations that build strong AI foundations will be better positioned to reinvent, compete and achieve new levels of performance.
Those are reasonable claims, but they do not reduce the need for disciplined planning. Faster change increases the premium on a planning system that can calculate consequences quickly and credibly. AI can accelerate analysis, summarize scenarios and improve executive access to planning insight. It cannot replace the need to govern trade-offs across budgets, capacity, architecture, timing and risk. In fact, AI is only trustworthy in this context when it is tightly coupled to mathematically sound planning data, explicit constraints, dependency logic and algorithmic calculations. Otherwise, it risks producing plausible but unsupported answers.

## What CIOs should demand

For CIOs, this leads to a more useful conclusion than simply restating the 25 reasons projects fail. Large programs usually fail because the enterprise cannot see and govern the interaction of those reasons in time. A modern control system for change, therefore, needs at least six capabilities: a unified planning model across priorities, budgets and capacity; side-by-side scenario comparison; interdependency mapping; early visibility into bottlenecks; continuous recalculation as conditions shift; and executive-facing summaries that turn data into decisions. Those are the capabilities that make continuous scenario planning strategically important.

The question is no longer whether planning happens. It already does. The real question is whether planning remains static, fragmented and largely narrative, or whether it becomes dynamic, scenario-based and decision-grade. That is the real fix hidden beneath the 25 symptoms.

**This article is published as part of the Foundry Expert Contributor Network.**

**Want to join?**
‘Reskilling’ in IT: how CIOs make sure their workforces stay current in a changing market

From the outside, keeping pace with new technologies looks like an almost impossible challenge. It seems as if something new appears every month or even every week. From the inside, keeping knowledge up to date is seen as a fundamental piece of the job. Reskilling is the master key that makes it possible to keep up with the times. CIOs are the ones leading these processes, and they confirm that they are not really anything new, but almost inherent to the profession.

“In our profession, training is like courage in a soldier: it is taken for granted,” says **Gracia Sánchez-Vizcaíno, CIO of Securitas for Iberia and Latin America**, over the phone. “Without continuous training, teams become obsolete,” she notes. “We are a world in which new things are constantly coming out or the existing ones are changing,” agrees **Álvaro Ontañón, CIO of Merlin Properties**, speaking to CIO España. What has changed now is the speed: everything moves much faster than it could some 20 years ago. Emerging technologies have become a continuous disruption, forcing people to be learning at every moment what to do and how.

But, as Sánchez-Vizcaíno adds, speed is not the only challenge the IT department has to face. The range of people who must receive training has also changed. New technologies have become such a transversal piece of corporate activity that it is no longer only a matter of training IT staff; they must also teach teams outside the department what to do and how to do it.

In a way, looking at which key knowledge areas dominate the concerns of CIOs and training experts helps visualize the scope of this challenge. Sánchez-Vizcaíno confesses she is especially interested in agentic AI, and everything connected to this topic is a hot spot. “It is moving very fast. We need a change of mindset, but also of knowledge,” she says, confirming that learning happens in real time. Everything changes in parallel with when and how it is learned. “The hottest topics right now are generative AI and proactive cybersecurity,” adds, via email, **Magalí Riera, director of the Master’s in People Management, Talent and Digital Transformation at UNIE Universidad**. Classics such as data and analytics or automation also remain part of new-skills training. The list likewise includes the so-called soft skills, on which, as Riera points out, “a lot of focus is being placed for technical profiles.” Communication, negotiation and leadership thus take a seat at the table. Right now it especially matters “to have critical thinking,” adds David González, director of IT Perm Recruitment at Hays España, something connected to the disciplines and knowledge of the humanities.

## Train or hire?

The need for staff who master all these new skills also opens another point for debate. Is it worth signing new professionals who already have them, or is it better to keep existing teams and add the knowledge through continuous training? Here the perception seems unanimous. Hiring talent can contribute a great deal, but having an established team has many advantages, and reskilling will deliver high returns. “Within the technology market, reskilling should not be an option but an advantage,” says González.
The expert points out that it is not so much about pitting one model against the other as about seeing what each contributes and weighing it. Nor is the IT labor market what it was some time ago, given over to a sort of signing race. “Attracting talent is very difficult,” says Sánchez-Vizcaíno, “and training is one more form of compensation,” one that, she adds, increases commitment, retrains existing staff and reduces dependence on external resources that can be expensive and generate no commitment. Hiring also has its costs, which include the time of the recruitment processes themselves and of acclimatization, plus the friction of adding someone who does not know the internal culture. “The cost of dismissing and hiring a new employee can be as much as three times the cost of carrying out a proper reskilling process,” says Riera. And, as this expert asserts, updating knowledge “is not limited to a simple corporate well-being option; it is a vital strategy for lasting in the market.”

Likewise, when you already have a well-oiled team that works, with varied profiles and talents that supply the different necessary pieces, it can be worth updating instead of adding new pieces and trying to fit them in. “The team gives you something that goes beyond the technology side,” says Ontañón. As CIO, he notes, he brings “confidence” to the management team that delegates IT responsibilities to him, but he, in turn, needs a team in order to deliver. “For me, within that, trust is very important. Once you have built a team and earned that trust, if the limitation is the technology (unless it is something very disruptive, you have to start from scratch, or it is costly and a signing is needed), we dedicate time to it.” You give the team the framework to take on that knowledge.

## What a reskilling process should look like

“For me, a key point is doing that reskilling from the business side, not only from technology,” explains González. Before starting to train, you have to ask yourself questions. The expert notes that a successful process must start from knowing what is going to happen in the short, medium and long term, promote key areas of the market (what you are going to need) and rest on continuous but applied training. He recalls that a large majority of companies claim they offer it, yet when workforces are asked, only half of the staff say they receive it. It should not, however, start from generic certifications. And although the industry acknowledges that the video-based mini-courses so classic in corporate environments can serve for routine matters (such as workplace-risk training), they do not work when new skills are at stake. Riera recommends “avoiding purely passive learning methods” and proposes project-based learning.

Sánchez-Vizcaíno is seeing it happen. The way we share and process information has also changed and, for all of this to work, training must move from theoretical knowledge to adaptable practical skills. People learn by sharing in Teams channels, talking with colleagues and even listening to other companies. These are more multidirectional processes, in contrast to the one-way or two-way training of the past. “More than ever, learning by doing,” this CIO confirms. And, above all, by generating a favorable space. As Sánchez-Vizcaíno explains, it is about creating a breeding ground conducive to learning and to taking on new skills, about “developing the motivation to learn.”
“If you really want to take advantage of the learning and actually learn, you must have the will to do it and an affinity with the training you are going to receive,” stresses Ontañón. In his team, the workforce is involved in the preliminary process. “If there are no interested people, there is no training.” It is a pragmatic decision, one that avoids the feeling of thinking of tomorrow’s course as a burden and shores up the desire to do it. Although, he admits, that tends to be almost the default state in the IT world, where people usually live on the edge of change, even outside working hours, and want to learn. Likewise, working with interests and needs in mind helps with flexibility: this CIO believes the selection of the trainer and the content “is key.” “We dedicate a lot of time to this, because it is what can guarantee whether it ends up being useful or not.” It is not about training for training’s sake, but about responding to those concerns. In a world where information is much more accessible than in the past, there are many resources and many sources of knowledge. “The negative part is that there is so much that you have to look for what can really interest you and adapt it,” he acknowledges. Hence, selecting and running ad hoc training works so well.

## Making it part of the corporate vision

Another important point when running reskilling processes is understanding their impact on how the workforce functions and how they should be integrated into working hours. González notes that you must accept that productivity will fall at first before rising. “The companies that fail are the ones that demand senior performance from the start,” he says. A period of acclimatization is needed, which may even involve temporarily reinforcing the workforce with external or temporary staff. “This learning is a necessity; it is not extra training or a ‘reward’ for the worker,” adds Riera. “Therefore, it must form part of the work agenda,” she continues. The professor recommends not filling the workday with courses but dedicating “a small part of the day” to them. This will not weigh down day-to-day work. It is also crucial to maintain “clear communication with the team” about what is being done, why, and what will be gained, as the Hays expert explains.
US government moves to give federal agencies access to Anthropic’s ‘Claude Mythos’

The White House is pushing a plan to allow major federal agencies access to Anthropic’s high-performance AI model ‘Claude Mythos’. Because the model can rapidly detect cybersecurity vulnerabilities and even suggest how they might be exploited, safeguards against misuse are being developed in parallel.

According to Bloomberg, Gregory Barbaccia, federal chief information officer (CIO) at the White House Office of Management and Budget (OMB), told officials across departments in an internal notice on the 15th (local time) that protective frameworks are being built so that federal agencies can use the model. No specific adopting agencies or timeline were given. Barbaccia explained that the office is “working closely with the model provider, industry partners and the intelligence community to put appropriate guardrails and safeguards in place, and reviewing the possibility of providing a modified form of the model to agencies.”

The move is notable because it is being pursued while the Department of Defense’s supply-chain risk designation of Anthropic, issued on March 3, remains in effect. The designation was upheld on April 8 when the DC Circuit Court of Appeals denied a request to stay it, so Anthropic remains excluded from defense contracts. Civilian federal agencies, by contrast, could gain access through this initiative. The White House and Anthropic did not respond to requests for comment.

## Defining the guardrails is the key issue

The ‘modified model’ mentioned in the notice highlights the uncertainty about how, and how broadly, it would actually be deployed. On April 7, as part of ‘Project Glasswing’, Anthropic unveiled a ‘Claude Mythos preview’ provided on a limited basis to select technology and financial institutions. At the time, Anthropic said internal testing showed the model had found thousands of zero-day vulnerabilities across major operating systems and browsers, and drew a line against any general release.

Neil Shah, vice president at market research firm Counterpoint Research, said that “for federal adoption to be credible, clear assurance standards are needed,” and that “source code under analysis must be managed in isolated, air-gapped environments, and the data must not be used to retrain the base model.” He added that “transparency and controls should be strengthened, including human review procedures before bug fixes.”

## Ripple effects for the enterprise market

The same security-standard questions apply directly to enterprise AI adoption strategies. OMB’s move shows federal cyber defense strategy shifting toward next-generation AI models that detect vulnerabilities faster than humans can. Shah stressed that “the divergence between the Pentagon and the White House shows how important it is to control the deployment of powerful AI technology,” and that “a layered control framework is needed across detection, triage, security, verification and execution.”

The technology gap is also appearing between countries. Early access has so far been provided only to the UK AI Security Institute, while most major European institutions have been excluded. If OMB’s plan goes ahead, the US federal government is expected to secure defensive AI capabilities ahead of Europe, even as the Pentagon’s sanctions against the same company continue to work through the courts.

## A ‘modified model’ to sidestep the Pentagon’s restrictions

Anthropic appears to be using the modified-model approach to work around the Pentagon’s hard line. Shah said the “modified model sidesteps the Pentagon’s binary approach while offering a secure envelope that can be safely applied in civilian and enterprise environments within agreed guardrails,” predicting that the approach “will set a precedent for expansion to other government agencies and enterprises.”

Meanwhile, Anthropic’s federal access has fluctuated in recent weeks. On March 26, a California federal court granted Anthropic’s request for an injunction against a separate designation covering the civilian sector, giving contractors time to re-examine their AI supply chains. Anthropic is now excluded from military procurement, while restrictions on civilian systems are temporarily suspended, and discussions on expanded access through OMB proceed at the same time. Contractors are consequently struggling to determine where AI models actually sit inside their systems, which is affecting federal AI supply-chain risk management across the board.

dl-ciokorea@foundryco.com
Databricks names Simon Davis head of APJ: “expanding the regional business amid 85% Q4 growth”

Databricks’ APJ region recorded year-over-year revenue growth of more than 85% in the last quarter, emerging as one of the company’s fastest-growing regions worldwide. As part of its expanded investment in the region, Databricks currently employs more than 1,500 people there and plans to quadruple its footprint later this year by relocating to a new Singapore APJ regional headquarters of roughly 900 pyeong (about 3,000 square meters) in the IOI Central Boulevard Towers.

According to Databricks, as artificial intelligence (AI) adoption spreads across APJ, its customer base is gradually expanding across major industries including financial services, telecommunications and the public sector. Samsung Life Insurance in Korea, Singapore Customs and SingTel have recently joined as customers, while major Asia-Pacific companies such as LG Electronics, Atlassian, National Australia Bank (NAB) and Toyota already use Databricks.

Based in Singapore, Simon Davis will lead Databricks’ APJ business going forward, driving strategy, operations and growth across key markets including Korea, Japan, Australia and New Zealand, ASEAN, India and Greater China. He brings more than 30 years of experience in enterprise technology, data and cloud services, having held senior leadership roles at global companies including SAP, Splunk, Microsoft (MS), Salesforce and Oracle. Most recently he served as president of SAP Asia Pacific, overseeing the regional business end to end, including strategy, operations, people, sales, services, partnerships and profitability. Before joining SAP, he was a senior vice president and general manager at Splunk.

Ron Gabrisko, chief revenue officer (CRO) at Databricks, said: “Simon Davis is a leader with deep regional expertise and industry insight, as well as an outstanding track record of building results-driven teams. His leadership will be key to driving the next stage of growth in APJ, one of our fastest-growing regions, and to helping enterprises unlock the full potential of data and AI to transform their businesses.”

Simon Davis, senior vice president and general manager (SVP & General Manager) for APJ at Databricks, said: “APJ is one of the most digitally advanced and AI-ready regions in the world, and many companies are moving quickly beyond experimentation to generating real business impact. I am delighted to join Databricks at such a pivotal moment.” He added: “What sets Databricks apart is how it combines rapid innovation with strong execution, helping customers unify their data and turn AI applications and agents into tangible business outcomes. I look forward to working with Databricks’ teams, customers and partners to accelerate this momentum and deliver meaningful results.”

dl-ciokorea@foundryco.com
Adobe bets on agentic AI to rewrite SaaS for customer experience

Consumer engagement has been fundamentally changing with the advent of AI agents, forcing a rethink by software-as-a-service (SaaS) companies, and creativity platform provider Adobe is responding by shifting its approach to what it calls ‘Customer Experience Orchestration (CXO).’

Announced today at Adobe Summit, the new Adobe CX Enterprise suite is a pivot to a future defined by agents rather than by software alone, where SaaS companies claim an advantage based on their deep domain expertise and troves of first- and third-party data. The platform brings together customizable and out-of-the-box AI agents, Model Context Protocol (MCP) endpoints, and new intelligence systems built on Adobe’s orchestration engine. “SaaS is changing, and we are re-architecting so that we can participate in the reimagination, the redefinition of SaaS,” said Adobe VP Sundeep Parsa.

**[ More Adobe Summit 2026 coverage ]**

## Agents executing with guidance from a ‘coach’

Adobe CX Enterprise builds on the company’s Adobe Experience Platform (AEP) Agent Orchestrator, which brought AI agents directly into Adobe apps. Released in 2025, AEP now powers more than 1 trillion experiences annually, according to the company. AEP remains the “anchor” for Adobe CX Enterprise, which now gives customers the ability to create agent skills (reusable instructions), as well as providing specialized and customizable agents. These can be incorporated into any AI tech stack, including Anthropic’s Claude, OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, Nvidia’s NemoClaw, and others. Developers also have access to Model Context Protocol (MCP) servers and other infrastructure required to build customized use cases.

“We’re going to make sure our applications are not trapped inside our UI layer, that they become composable services available through MCP tool calls or the A2A layer,” Parsa explained. “Customers can tap into what they have and bring that into their own unique processes, be their own UI.” He emphasized the importance of customer choice. Many enterprises are still grappling with the ‘build or buy’ question; some will prefer to create their own bespoke user interface (UI) layer, while others will have no interest in doing so.

With CX Enterprise, enterprises can use pre-loaded agent skills to build custom workflows, or can launch agents pre-built for specific tasks like workflow optimization (coordinating tasks or automating handoffs) and brand governance (enforcing policies, managing permissions, tracking asset rights). And a new Adobe CX Enterprise Coworker, to be available in the coming months, will act on specified goals and orchestrate other agents to perform multi-step actions. For instance, if a marketing team is looking to increase loyalty subscriptions by 3% in the next quarter, the CX Enterprise Coworker will work with other agents to identify relevant audience segments, surface performance insights, create a plan, and develop email copy or visual assets, Parsa noted. Once all this is approved by a human, the Coworker will then help execute the campaign and monitor results.

Whereas previously agents would build an audience, then “go to sleep,” Adobe’s new CX Enterprise Coworker is “always on,” has persistent memory, and can run workflows across weeks, or even full financial quarters if required, Parsa explained.
He likened the CX Enterprise Coworker to an American football quarterback, the player who directs the activities on the field, guided by a coach on the sidelines. Coworker’s coach is a marketer or a brand specialist. “We’re doubling down on this framing of customer experience orchestration,” Parsa says.

## Moving to one-on-one personalization

Along with these agentic tools, Adobe is introducing two new intelligence systems: Adobe Brand Intelligence and Adobe Engagement Intelligence. Brand Intelligence is built on a fine-tuned large language model (LLM) with vision-language capabilities that learns from “qualitative and nuanced inputs” like annotations, feedback cycles, or rejected assets. “Brand intelligence is going after a much harder problem than ‘a brand kit,’ which is a codification of a CSS style guide,” Parsa explained. The LLM can begin to understand brand sentiment, informed by “data engagement signals and the actual enterprise assets.”

Adobe Engagement Intelligence helps teams decide next best offers, messages, or other actions for targeted customers. This is based on their lifetime interactions, rather than click-throughs or conversions, according to Parsa. Whereas previously, less was more, “in this world, more is better,” he said, pointing out that the promise of generative AI is producing more material economically. “It’s not creating more for more’s sake, it’s targeted campaigns that get you much closer to one-on-one personalization.” Early production gains are “massive,” Parsa claimed. This is because troubleshooting and early detection of problems now takes “hours, not days and weeks.”

## SaaS companies’ data advantage

Like many SaaS companies grappling with an agent-driven future where pay-per-seat models are becoming less relevant, Adobe is emphasizing its data advantage. Parsa pointed out that more than 20,000 enterprises have built on Adobe’s platform over the years, giving the company enormous amounts of data alongside domain expertise.

Generative AI and AI agents do a good job of understanding the “corpus of world knowledge” and building some “useful capabilities for all of us,” Parsa acknowledged. “But these technologies stop at the enterprise walls, because those are ‘walled gardens.’” Further, enterprise context is very complicated and spread across numerous applications, he noted. “It’s codified in documents; in some cases just tribal knowledge informs how people function on a day to day basis.” AI agents working on their own (like OpenClaw or Claude Cowork) break in the enterprise because they are “brittle” and not grounded in enterprise data, he said. “We are a proxy for all of the enterprise context that lives inside our applications,” said Parsa. “We’re going to bring that into the AI layer much faster than a customer restarting that whole process with an AI platform.”

Ultimately, he said, Adobe is “adapting and adjusting” to customer feedback and consumer interaction with brands, as well as with the internet itself, as customer engagement undergoes a dramatic shift in the era of AI. As this unfolds, Parsa emphasized the importance of “open, open, open.” “We absolutely are going to work with tech partners, we’re going to work with other SaaS companies to make sure that we stay flexible and meet the customer where they are,” he said.
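To ground the “agent skills (reusable instructions)” concept mentioned above: Adobe has not published the internal format of its skills, but the underlying idea, a named, parameterized instruction an orchestrator can hand to any model, is easy to sketch. The following toy example uses entirely hypothetical names and is not Adobe’s API:

```python
from dataclasses import dataclass

@dataclass
class AgentSkill:
    """A reusable instruction: a prompt template plus the inputs it expects."""
    name: str
    template: str
    required_inputs: list[str]

    def render(self, **inputs: str) -> str:
        missing = [k for k in self.required_inputs if k not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.template.format(**inputs)

segmenter = AgentSkill(
    name="audience-segmentation",
    template="Identify audience segments likely to respond to {offer} among {population}.",
    required_inputs=["offer", "population"],
)

# An orchestrator would route the rendered instruction to whichever model
# (Claude, ChatGPT, Gemini, ...) the customer has plugged into its stack.
prompt = segmenter.render(offer="loyalty subscription", population="lapsed members")
```

The value of the pattern is that the skill, not the model, carries the institutional knowledge, which is what makes it portable across AI stacks.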
The VMware deadline that could reshape your IT strategy

Many VMware customers assumed the most disruptive effects of Broadcom’s acquisition were already behind them. Licensing changes and pricing shifts required attention, but infrastructure strategy largely stayed the same. That assumption is about to be tested.

By October 2027, VMware customers must migrate to VMware Cloud Foundation (VCF) 9.[1] What sounds like a routine upgrade has much larger implications. The timeline is compressed. Hardware and operational changes may be required. And organizations are being pushed to rethink platform strategy at a time when they are balancing modernization, cost control, and new demands such as artificial intelligence (AI) and cloud-native development.

Infrastructure transitions rarely happen quickly. Moving workloads, retraining teams, validating interoperability, and maintaining resilience all take time. When migration is mandated rather than optional, the risk profile shifts. CIOs must act within a fixed window of time while still supporting critical systems and ongoing initiatives.

**A shift in trust and long-term strategy**

Industry analysts expect more than one-third of VMware workloads to move to alternative platforms by 2028.[2] That projection reflects a reassessment of vendor dependency and long-term cost predictability, pressures intensified by ongoing supply chain delays and rising hardware costs. Pricing changes and bundled licensing have introduced uncertainty into infrastructure planning, prompting many leaders to reconsider whether maintaining the status quo still makes sense.

At the same time, modernization pressures are building. Hybrid cloud has become foundational, and application teams increasingly develop in containers. AI workloads are moving from experimentation into production. These shifts require platforms that can support traditional virtualization alongside modern application models, without adding operational complexity.

> **“No responsible IT person would put a new workload on VMware.”**
>
> – Lee Caswell, SVP, Product and Solutions Marketing, Nutanix

Some enterprises are responding by containing their existing VMware environments while directing new workloads elsewhere. Others, according to Lee Caswell, SVP of Product and Solutions Marketing at Nutanix, are evaluating full platform transitions to regain flexibility and cost control.

**Turning a required migration into a strategic advantage**

The VCF 9 deadline is more than a compliance milestone. It offers a chance to rethink infrastructure design for the next decade. A modern platform should enable organizations to:

* run virtual machines, containers, and AI workloads side by side
* operate consistently across on-premises, cloud, and edge environments
* preserve hardware investments and team expertise
* maintain resilience, security, and operational simplicity

The Nutanix Cloud Platform supports this transition by providing an enterprise virtualization foundation that integrates hybrid cloud operations, container orchestration, and AI readiness within a single operating model. Built-in migration tools and flexible deployment options can help reduce switching friction while giving organizations control over the pace of modernization.

As infrastructure decisions become harder to reverse and modernization pressures accelerate, CIOs face a clear choice. They can treat the VMware deadline as a forced disruption or use it as a catalyst for transformation.
For organizations willing to act early, the difference may shape operational agility and innovation capacity for years to come. Learn more by visiting www.nutanix.com/vmware-alternative/transition.

* * *

[1] Facing CIO backlash, VMware extends support and slows down release cycles, Gartner, Inc, Gyana Swain, July 18, 2025.

[2] The CIOs Guide to Broadcom’s Acquisition of VMware, Gartner, Inc, Julia Palmer, Mike Cisek, Tony Harvey, April 3, 2024.
The metric missing from every AI dashboard

Across industries, the conversation around AI has centered on capability. How fast can we implement it? Where can we automate? How much efficiency can we unlock? Those are reasonable questions. But they are not the only ones that matter.

A recent Gartner report found that 91% of CIOs and IT leaders say their organizations dedicate little to no time scanning for the behavioral byproducts of AI use. The same research makes something else clear: Preserving the resilience and safety of the workforce in the AI era is not simply a well-being initiative. It is tied directly to productivity.

As an industry, we measure performance gains very carefully. Simultaneously, we measure psychological strain much less closely. When we fail to measure something so important, something that directly affects productivity, culture and trust, that goes beyond a gap in analytics. It is a governance blind spot. That blind spot greatly concerns me.

## The invisible psychological cost of acceleration

When AI systems enter workflows, the early data often looks promising: Output improves; turnaround time shortens; quality rises. What takes longer to surface is the human response to that acceleration. As AI begins handling tasks that once required deep technical judgment, employees can start to wonder, internally, what happens to the expertise they spent years building. Cognitive offloading increases efficiency, and it shifts the relationship between a person and their work. When that shift happens too quickly, even capable employees can feel a subtle loss of mastery. That feeling rarely shows up in a dashboard. Instead, it can subtly change how people show up at work.

Job insecurity concerns often follow, though not always in obvious ways. It is not just about the fear of losing a role. More often, it is about uncertainty. When responsibilities blur and systems take on decision-making tasks, ambiguity increases. Many AI systems operate as “black box” models: systems whose internal reasoning is not fully transparent. When employees are expected to act on outputs they cannot fully explain, accountability can feel heavier. If something goes wrong, who is responsible? Lack of explainability increases perceived risk, and perceived risk increases stress.

Layer onto that the rise of AI-powered monitoring tools. Even when introduced with good intentions, continuous evaluation can feel different from periodic feedback. Some employees experience it as support. Others experience it as surveillance. This perception matters. Trust may start to erode until it’s razor-thin.

## The real-world impact of AI’s mental health strain

Slowly, employee behavior begins to adjust to this environment. Research highlighted by HR Reporter found that when employees feel threatened by AI adoption, they may respond with knowledge-hiding behaviors instead of collaboration. Self-protection begins to replace openness. Not because people are unwilling to contribute, but because they are trying to preserve their own relevance.

Motivation shifts as well. A recent Harvard Business Review study found that while generative AI improved task quality and productivity, it reduced intrinsic motivation by about 11% and increased boredom by roughly 20%. Additional research published in Behavioral Sciences suggests that sustained reliance on AI tools can alter emotional engagement with work over time. Therein lies the tension: Output improves as engagement declines. Not to mention workload issues.
AI is often introduced with the promise of reducing effort. Yet as Harvard Business Review recently noted, AI does not necessarily reduce work. It can create an intensity that boomerangs back on the workforce. When friction drops, expectations expand. Employees take on more work because they can. They operate at sustained speed because the system allows for it. Unfortunately, what looks at first like efficiency can slowly become fatigue.

None of these dynamics exists in isolation. They reinforce one another. Reduced confidence feeds insecurity. Insecurity alters behavior. Intensified workload accelerates exhaustion. And not everyone acclimates at the same pace.

## What leaders risk overlooking

In many organizations, performance dashboards light up before psychological ones even exist. We track uptime, output, cost savings and deployment velocity. We rarely track confidence, perceived relevance or how long it takes someone to recover after a public error. Stress does not always present as resistance. For managers, that distinction matters. Sometimes it shows up as overextension: employees taking on more than is sustainable because they feel pressure to prove continued value in an AI-enabled environment. A manager relying heavily on AI-generated analysis may not notice that dynamic until it has already done damage.

Isolation is another signal worth watching. As AI mediates more interactions, peer collaboration can quietly thin out. Work becomes efficient but less communal, and over time, that shift erodes belonging and morale in ways that don’t show up on any dashboard.

Leadership itself is not immune. AI can draft performance reviews, summarize meetings and generate strategy outlines at remarkable speed. But as McKinsey has observed, while AI can write, design and code, it cannot do the hard work of leadership. Mentorship, context-setting and ethical judgment remain deeply human responsibilities. If leaders outsource too much of the relational aspect of leadership to AI systems, employees may experience a subtle loss of support. None of this happens overnight, which makes it extremely easy to miss.

## Resilience as governance

Research published in Nature defines psychological resilience as the ability to recover or grow stronger in the face of adversity. Importantly, the study suggests that individuals with higher psychological resilience are more likely to maintain confidence and optimism when facing perceived career threats posed by AI. Resilience, then, is not abstract. It is measurable. It influences how people interpret change.

If we accept that adaptation stress is predictable in an AI-enabled environment, then resilience cannot be left to chance. Resilience must be built into how AI is deployed from the start. That begins with clarity. When leaders are explicit about how AI will be used, what will change and what will remain human-led, speculation has less room to grow. Ambiguity answers itself quickly, and usually with anxiety.

Clarity also extends to accountability. Employees need to understand where AI outputs end and where human judgment still carries responsibility. When that boundary is blurred, stress increases because no one is fully sure where decisions should live.

Over time, the conversation has to move beyond protection and toward growth. Reskilling is not only about preserving roles; it signals that relevance can evolve. When organizations invest in helping people adapt alongside technology investments, they reinforce stability rather than erode it.
Trust must be protected as carefully as performance. Surveillance capabilities and AI-enabled analytics should be implemented with intention and oversight. And, if we are serious about resilience, we should measure it. Just as we track deployment velocity and system performance, we can track engagement, skill confidence and recovery time after errors in high-speed environments. Behavioral byproducts are not soft signals. They influence performance as directly as any technical metric.

Gartner research is direct: Preserving workforce resilience and safety in the AI era is a core responsibility, not just for well-being but for productivity itself. If 91% of CIOs report dedicating little to no time scanning for these behavioral effects, then there is an opportunity, and perhaps an obligation, to lead differently. Resilience should sit beside capability on the technology agenda.

## A final reflection

Change has a way of exposing what we have not prepared for. When I think about the pace of AI adoption, I do not feel alarmed. I feel thoughtful. Technology has always advanced faster than our comfort with it. What matters is not whether it moves quickly; it is whether we move wisely.

In moments of rapid change, it is tempting to focus only on what is measurable. Speed. Output. Efficiency. The bottom line. Those are tangible. But what often determines long-term success is less visible: whether people feel steady, capable and trusted as the ground shifts beneath them.

AI will certainly continue to improve. What is less certain is whether leaders will give equal attention to the human side of the transformation. Confidence cannot be automated. Trust cannot be generated by a model. Those remain leadership responsibilities. If we approach AI with both ambition and care, we can build organizations that are not only more capable but more durable. That is a standard worth holding.

**This article is published as part of the Foundry Expert Contributor Network.**

**Want to join?**
AI is scoring your job candidates. Can you explain how? Somewhere in your organization’s hiring stack, there is probably an AI system producing candidate scores. If you’re a leader who helped evaluate or approve that system, here’s a question worth sitting with: If one of those scores got challenged, by a candidate, an internal audit or a regulator, could your team explain how it was produced? Not “the vendor said it’s accurate.” Not “the model was trained on historical data.” A specific, documented explanation of what criteria were evaluated, how the candidate performed against them and why those criteria are job-relevant. For a growing number of organizations using AI video interview scoring tools, the honest answer is no. And as regulatory frameworks targeting employment AI move from guidance to enforcement, that answer is a risk. ## What the system is actually optimizing for Before asking how accurate an AI scoring system is, the right question is what it is optimizing for. Many video interview scoring platforms evaluate tone of voice, pace, eye contact, facial expressions and fluency alongside, or in some cases instead of, the actual content of candidate responses. The underlying assumption is that these signals correlate with job performance or cultural fit. The evidence for that assumption is weak. The evidence that measuring these signals introduces systematic, legally significant bias is much stronger. Several major players in this space removed facial analysis features after regulatory pressure and public scrutiny. That acknowledgment — that criteria advertised as objective were neither reliable nor fair — should raise a harder question. If those criteria were in production and no one caught it until outside pressure forced a change, what else is still being measured that shouldn’t be? This is not a hypothetical risk. The EEOC has made it clear that employers are liable under Title VII for discriminatory outcomes from AI hiring tools, regardless of whether those tools were built in-house or purchased from a vendor. New York City’s Local Law 144 requires annual independent bias audits of automated employment decision tools and public disclosure of results. Illinois requires notice and consent before AI is used to evaluate video interviews. The EU AI Act, whose high-risk AI provisions take full effect this August, explicitly classifies employment AI as high-risk, with binding requirements for transparency, explainability and human oversight. The common thread: Can you explain what your AI is measuring, and can you demonstrate that it’s measuring the right things? ## The accountability problem at the executive level For technology leaders, this is where the conversation becomes concrete. Consider the scenario: A hiring decision gets challenged by a candidate, an internal audit or a regulator. The question is how the decision was made. “The AI scored them lower” is not a defensible answer in any of those contexts. It can’t be traced to specific job-relevant criteria. It can’t be explained to the candidate. It won’t satisfy an auditor. And if the system’s logic is proprietary and opaque, the organization has no way to produce a satisfying answer even if it wants to. The organizations that adopt black-box scoring tools often do so with the right intentions: To reduce human bias and create a more consistent process. Those are legitimate goals. But a system whose internal logic can’t be questioned, explained or audited just obscures bias. It doesn’t reduce it.
And when bias becomes difficult to see, it becomes more difficult to address. This is a pattern you’ll recognize from other domains. When a system produces outcomes that look plausible but are wrong in ways that aren’t immediately visible, the failure compounds before it surfaces. The cost of discovering it late is almost always higher than the cost of building it right from the start. ## What a defensible architecture looks like There is a meaningful difference between AI that scores interviews and AI that scores interviews in a way that can be explained and defended. The distinction is structural. Defensible scoring starts before any candidate records a response. It starts with the job. What competencies does this role require, and what does strong performance against each competency look like? From those answers, explicit rubrics are developed: criteria that describe what high-quality, adequate and weak responses look like for each dimension being evaluated. Those rubrics are reviewed and approved by the hiring team before scoring begins. When responses come in, the AI evaluates what candidates actually said against those pre-defined criteria. Not tone. Not pacing. Not facial expression. What they communicated, measured against a standard the hiring team set and can explain. Criterion-level scores roll up to an overall assessment, and every part of that chain is visible and auditable. This architecture has an important secondary property: The human remains meaningfully in the loop. The AI generates a starting point by identifying relevant competencies and drafting rubric criteria from the job description, but the standard is owned by the people responsible for the hire. If a hiring manager can’t look at a scoring rubric and explain what it’s evaluating and why, it should not be deployed. That is not a burden on the tool. It is the minimum condition for using it responsibly. ## Four questions for the governance conversation For leaders evaluating or overseeing AI video interview tools, four questions surface most of what matters. 1. **What specifically is the system scoring?** Request an explicit list of evaluation criteria. If the answer includes anything beyond the content of candidate responses, ask for the validation data that connects those criteria to job performance outcomes. 2. **Are the criteria derived from job requirements?** Generic rubrics applied uniformly across roles create standardized evaluation, not structured evaluation, which is different. Legitimate scoring starts from the specific competencies required for the specific role. 3. **Can the criteria be reviewed, modified and approved before scoring begins?** If the rubrics are fixed and opaque, the organization is not in control of its own evaluation standard. That is a governance gap. 4. **Can any score be explained to a candidate or a regulator?** This is the accountability test. If the explanation requires “the AI said so” rather than pointing to specific, documented criteria and how a candidate performed against them, the process will not withstand scrutiny. Well-designed systems answer these questions directly. The ones that can’t are telling you something important about the tradeoffs their creators made. ## Why this moment matters The EU AI Act deadline is in August, forcing organizations with global operations or EU-based candidates to evaluate their tech. But getting this right isn’t just regulatory; it’s practical. When hiring teams can see exactly how a score was produced, they use it.
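The roll-up itself is mechanically simple, which is the point: nothing in the chain needs to be opaque. The sketch below is a minimal, hypothetical illustration of criterion-level scores rolling up into an overall assessment with an audit trail; the class names, fields and weighting scheme are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One job-relevant dimension, approved by the hiring team before scoring."""
    name: str
    weight: float   # relative importance, set by the hiring team (assumption)
    rubric: str     # what strong, adequate and weak responses look like

@dataclass
class ScoredCriterion:
    criterion: Criterion
    score: float    # e.g., 1-5 against the rubric
    evidence: str   # quote or summary drawn from what the candidate said

def overall_score(scored: list[ScoredCriterion]) -> float:
    """Roll criterion-level scores up into one weighted overall assessment."""
    total_weight = sum(s.criterion.weight for s in scored)  # assumes a non-empty list
    return sum(s.score * s.criterion.weight for s in scored) / total_weight

def audit_trail(scored: list[ScoredCriterion]) -> list[dict]:
    """Keep every link in the chain visible: criterion, rubric, score, evidence."""
    return [
        {
            "criterion": s.criterion.name,
            "rubric": s.criterion.rubric,
            "score": s.score,
            "evidence": s.evidence,
        }
        for s in scored
    ]
```

With a structure like this, answering a challenge means printing the audit trail rather than reverse-engineering a model.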
When they can’t explain it, they override it or work around it, and the efficiency gains disappear. The tools that will last in enterprise hiring stacks are the ones that make decisions transparently enough that the humans responsible for those decisions trust them. That’s not a high bar. But it requires being precise about what any given AI system is really measuring. And honest about whether that’s what you actually want to know. **This article is published as part of the Foundry Expert Contributor Network.** **Want to join?**
7 reasons you keep getting passed over for CIO Not every effective IT lieutenant becomes a credible CIO candidate. Those who make the leap, however, often do so by reframing their jobs from delivering what the business asks for to shaping what the business becomes. “Strong IT leaders run IT well. CIO-ready leaders focus on how the business gets better because of IT,” says Kevin Rooney, CIO at business consultancy West Monroe. “That is usually the biggest shift. It moves from execution to impact.” The distinction matters more than ever. Technology’s strategic importance has elevated the CIO role: 65% of CIOs now report directly to the CEO, up from just 41% a decade ago, according to Deloitte’s 2025 Tech Exec Survey. And 67% aspire to be CEO — more than any other tech executive surveyed. But with greater expectations come higher standards. If you’ve been passed over more than once, it’s probably not bad luck — it could be a pattern. But patterns can be broken. Executive recruiters, for their part, see the same missteps over and over. And IT leaders who successfully land in the CIO chair can usually pinpoint exactly when their thinking shifted, helping them rise to the next level. Here are seven reasons aspiring CIOs fall short — and how to recognize and break free of what’s holding you back. ## You still operate as an order taker The most common gap between competent IT leaders and hire-worthy CIOs is influence. Many VPs and directors become excellent at execution: taking in requests, managing backlogs, and navigating complexity. But that same focus can limit their ability to step back and assess the big picture. “CIO-ready leaders help shape business strategy,” Rooney says. “They bring a point of view about where the organization should place bets, what tradeoffs matter, and sometimes what the company should not do.” Kelly Doyle, managing director at Heller Search Associates, a recruiting firm specializing in technology executives, often sees this gap. “To grow into the CIO role,” she says, “leaders must shift from being order-takers to being business influencers.” Changing this requires more than a different mindset — it demands action. Eduard de Vries Sands, an IT executive and advisor who has coached aspiring CIOs, recalls working with a director who excelled at execution. When de Vries Sands coached him to start proposing ideas for growth — and the director started going into the field — perceptions changed. “He was asked to present at the sales meeting, and when he used the words ‘when I was on a sales call two weeks ago,’ he was instantly seen as an executive and not just an IT director,” de Vries Sands says. ## You lead with technology, not outcomes IT leaders don’t advance to the top spot by delivering projects. They get there by changing the business. One common shortfall is the inability to have a P&L conversation without a translator. “Strong IT leaders can tell you what the technology does. CIO-ready leaders can tell you what it’s worth,” de Vries Sands says. “They walk into a board meeting and talk about margin improvement, customer retention, and revenue growth — not technical topics.” The clearest signal that someone isn’t ready is when they can’t tie their work to business value, according to Doyle. “Too often, candidates focus on activities instead of outcomes — listing projects, tools, or technical achievements without explaining how those efforts moved the business forward,” she says. The numbers underscore why this matters.
Just 48% of digital initiatives meet or exceed business outcome targets, according to Gartner’s 2026 CIO and Technology Executive Survey. Organizations need leaders who don’t just implement systems but ensure they deliver results. “When a candidate leads with technology, the CEO hears cost, risk, and complexity,” de Vries Sands says. “When they lead with outcomes, the CEO hears partner.” ## You haven’t built relationships beyond IT CIOs succeed through coalition-building, not solo performance. Yet many aspiring CIOs remain “internally famous but externally invisible,” as Rooney puts it — well known inside IT but not by the people running the business. Too few IT leaders have taken the time to understand the perspectives and priorities of counterparts across the organization, Doyle says. “They know the technology inside and out, but they haven’t built the business acumen that allows them to connect the dots, anticipate needs, and translate tech investments into business outcomes,” she says. Niel Nickolaisen, an IT advisor and field CTO at Valcom Technologies, estimates that 60% to 70% of his time as a CIO is spent on relationships outside IT — members of the executive suite, their teams, the CEO, and the board. Building those trusted relationships is part of clinching the job, he says — and it might require the existing CIO’s help to create the opportunities. Internal candidates often lose out to external hires for exactly this reason, Doyle says. They “miss out because they haven’t raised their hand for broader opportunities or built relationships beyond their immediate team,” she says. “Without visible influence or enterprise-level engagement, leadership struggles to trust that they can step into the role when it opens.” ## You require certainty before acting CIOs operate in ambiguity. Boards and CEOs need technology leaders who can make progress without perfect information, framing decisions in ranges and scenarios rather than waiting for requirements to firm up. One signal that someone isn’t ready is the inability to simplify, West Monroe’s Rooney says. “If it takes 40 slides to explain a decision, the thinking is not finished yet,” he says. “At the executive level, leaders need to distill complexity into a clear direction the business can act on.” CIO-ready leaders propose, rather than wait. Every conversation, every update, every one-on-one with your CEO should answer one question, de Vries Sands says: “What changed for the business because of what we did?” ## You haven’t made yourself replaceable It sounds counterintuitive, but one of the surest paths to promotion is building a team that doesn’t need you. Part of moving up is building your team’s skills and capacity so the organization can afford to move you, Nickolaisen says. “In effect, you need to become replaceable,” he says. It’s the difference between hero mode and leadership, Rooney says. CIOs don’t succeed by personally solving every problem. “They build teams and systems that consistently produce results,” he says. “Leadership at that level is about creating repeatable impact.” CIO-ready leaders build teams that believe in them, surface ideas, and deliver consistently, Heller Search’s Doyle adds. “Those are the capabilities that transform a strong IT leader into a credible CIO candidate,” she says. ## You assume industry doesn’t matter A common mistake IT leaders make when positioning themselves for the top job is assuming their technical expertise translates universally. 
In the era of AI, a deep understanding of sector-specific processes is increasingly critical. CEOs want technology leaders who already grasp the nuances of their business and can immediately add value, Doyle says. “Aspiring CIOs should think strategically about which CEOs would benefit from a conversation with them — those are the leaders they should be engaging with on a job search,” she says. The goal is to demonstrate understanding of the challenges, priorities, and broader landscape of that specific industry. Preparing for a CIO opportunity isn’t just about highlighting past accomplishments. “It’s about showing you understand the business context, can speak the language of the executive team, and can translate your experience into meaningful outcomes for that organization,” Doyle says. This doesn’t mean you can’t move across industries; it means you need to do the homework. Frame your experience to clearly articulate how your perspective applies to their specific challenges. Generic leadership credentials aren’t enough when boards are betting on AI transformation and need someone who understands their operating model. ## You can’t tell the story Technical expertise is table stakes. What separates candidates from those who land the job is the ability to translate that expertise for non-technical stakeholders. A CIO must tell the story of technology in a way that resonates — contextualizing the complex and leaving acronyms at the door, Doyle says. “The role today requires someone who can bridge worlds: business strategist, storyteller, and translator of value,” she says. A common mistake is talking about what you have done instead of what you have changed, de Vries Sands says. “Resumes full of implementations, deployments, and rollouts — those are activities,” he says. “The question every CEO is asking is: Did the business perform better because of you?” He offers a reframe: Not “I implemented SAP S/4HANA.” Instead: “I led an enterprise transformation that delivered $12 million in annual benefits and went live on time and on budget, which is rare for programs of that scale.” The storytelling skill extends beyond self-promotion. CIOs need to articulate what AI means for the workforce, what a platform investment will enable, and why a particular tradeoff makes sense. The role, especially with the rise of AI, is increasingly centered on communication and “humanness,” Doyle says. “CIOs need the trust and support of leaders across the business, and that requires executive presence, humility, and investment in their team’s success,” she says. ## Start acting like a CIO — now On one point, everyone agrees: The behavior must come before the title. The role doesn’t create impact — it recognizes the impact someone is already making, Rooney says. “The strongest future CIOs build mechanisms that deliver outcomes across the organization, and they do it through their teams,” he says. “One win can happen by chance. A system of wins shows leadership.” De Vries Sands puts it more bluntly: “I have seen technology leaders at the VP level who were already operating as CIOs in every meaningful sense,” he says. “They had the relationships, the credibility, the business fluency. The title was a formality. The ones who wait for the title to start behaving like a CIO rarely get it.” What does demonstrating readiness look like in practice? Lead with value, not technology, and speak the language of the business as fluently as you speak the language of IT, Doyle says. 
“Go into every conversation already understanding the KPIs, goals, and pressures facing your business partners,” she says. “Cross-functional thinking is non-negotiable — CIOs must be students of the entire business.” Finding a mentor or coach and proactively seeking opportunities that expand your scope can also help. “If someone aspires to the CIO role, they should seek out responsibilities that stretch their capabilities, expose them to cross-functional work, and round out the competencies expected of a modern CIO,” Doyle says. The clearest sign of readiness is when the organization starts treating you like a CIO, Rooney says. “Often the fastest way to get the job is when the enterprise starts acting like you already have it,” he says. “Leaders across the business begin pulling you into decisions because they trust the perspective you bring.”
Living off the Land attacks pose a pernicious threat for enterprises Living off the Land attacks have become one of the most persistent and difficult threats facing enterprise security teams. Unlike traditional intrusions that rely on custom malware or obvious exploits, these attacks weaponize the tools organizations already trust and depend on every day. PowerShell, Windows Management Instrumentation, PsExec, scheduled tasks, bash scripts and other native utilities become part of the attack surface. These attacks succeed not because defenders lack tools, but because defenders still assume that legitimate activity is inherently safe. That assumption allows adversaries to blend seamlessly into normal operations. Instead of triggering alerts tied to malicious binaries or known signatures, Living off the Land techniques exploit legitimate administrative functionality to move laterally, escalate privileges and quietly exfiltrate data. From the attacker’s perspective, the goal is simple: operate within the environment’s rules rather than break them. As enterprises expand their use of cloud services, automation frameworks and hybrid architectures, the reliance on native system tools continues to grow. The same capabilities that enable scale, resilience and efficiency also create ideal conditions for stealthy intrusions. Recent threat intelligence reports show that a majority of modern attacks now incorporate Living off the Land techniques, underscoring how quickly this tradecraft has become the norm rather than the exception. For CIOs, the concern is not just that these attacks are hard to detect. It is that they exploit the very mechanisms used to keep systems running. Whether managing critical communications infrastructure at a federal agency (which one of us did as CIO of the FCC for four years) or overseeing enterprise IT operations, the tension remains constant: Administrative tools are simultaneously essential for operations and attractive targets for adversaries. Blocking these tools outright is rarely an option without disrupting critical business functions. The result is increased dwell time, higher remediation costs, reduced visibility into attacker intent and a steady erosion of trust in traditional security controls. High-profile Advanced Persistent Threat (APT) actors such as Salt Typhoon illustrate how sophisticated adversaries can conduct long-running operations using little more than system-native capabilities. With sufficient knowledge of enterprise environments, attackers can persist for months while appearing indistinguishable from legitimate administrators. Evan recently observed a Living off the Land incident at a major telecommunications provider that highlights this challenge. Security rules initially blocked a set of IP addresses believed to be malicious. Those addresses turned out to be valid customer premises equipment. Disabling them degraded customer performance and created operational risk, while the attacker activity continued elsewhere using legitimate tooling. This kind of misalignment between security signals and business reality is increasingly common in Living off the Land scenarios. ## Organizations most at risk from Living off the Land attacks Every enterprise is vulnerable to Living off the Land attacks because the techniques rely on standard operating system functionality rather than specialized software. That said, organizations that operate complex, distributed or mission-critical environments face disproportionately higher risk.
Critical infrastructure providers such as utilities, telecommunications networks and transportation systems are especially exposed. These environments often include devices that haven’t been patched or updated in years and can lack even basic controls that we take for granted today. They depend heavily on high-privilege administrative tools to manage uptime, safety and regulatory compliance. The geopolitical implications are significant: Adversaries targeting critical infrastructure increasingly use Living off the Land techniques precisely because they understand that defenders cannot simply disable the tools that keep essential services running. Financial institutions face similar exposure across trading platforms, payments infrastructure and identity systems where automation and remote management are deeply embedded. Hybrid environments further expand the attack surface by increasing the number of endpoints, identities and trust relationships attackers can exploit. The more administrative paths that exist between systems, the easier it becomes for adversaries to mimic expected behavior while advancing their objectives. The growing use by attackers of general-purpose GenAI and jailbroken large language models such as WormGPT compounds the problem. Automation scripts that once required deep technical expertise can now be generated, modified and adapted quickly. This lowers the barrier to entry and accelerates the spread of Living off the Land techniques across a broader range of threat actors. Ultimately, any organization that relies heavily on PowerShell, WMI or similar orchestration frameworks must assume that these tools will be targeted. The question is no longer whether Living off the Land techniques will be used, but whether the organization can identify malicious intent before meaningful damage occurs. ## Best practices for combating Living off the Land attacks ### Hardening native system tools without breaking operations The first step in addressing Living off the Land risk is hardening the system tools most commonly abused by attackers. This requires a careful balance. These tools are essential for IT operations, so controls must reduce abuse without undermining legitimate use. Effective hardening begins with tightening how and when administrative tools can be executed. Constraining scripting environments, enforcing signed scripts, reducing unnecessary functionality and applying least-privilege access principles all limit the opportunities available to attackers. Many organizations discover that privileges have accumulated over time in ways that no longer align with current operational needs. Hardening also includes disciplined configuration management. Attackers frequently exploit misconfigurations rather than software vulnerabilities. Regular audits of system settings, administrative permissions and automation workflows can eliminate gaps that quietly expand the attack surface. However, CIOs should be clear-eyed about the limits of hardening. These measures reduce exposure but do not prove intent. A well-configured PowerShell environment can still be misused by a compromised credential or a malicious insider. Hardening raises the bar for accessing systems, but once a bad actor has cracked a legitimate login, those controls do little to limit the havoc they can wreak. ### Continuous monitoring that understands behavior Continuous monitoring is essential for fighting Living off the Land activity. Context is what makes that monitoring effective.
What matters in Living off the Land scenarios is understanding how and why a tool is being used. A PowerShell command executed by the right account at the wrong time or in the wrong sequence may be far more significant than an obviously unusual event that lacks context. SOC teams need consolidated visibility across administrative tools, identities, systems and timing. Is a script being executed outside normal maintenance windows? Is a privileged account accessing systems it rarely touches? Are administrative actions chaining together in ways that suggest lateral movement rather than routine management? Context transforms noise into signal. Without it, security teams are flooded with alerts that reflect operational complexity rather than attacker intent. This leads to alert fatigue and missed opportunities to identify early-stage intrusions. Continuous monitoring must also account for the reality of hybrid environments. Visibility gaps between cloud services and on-premises systems create blind spots attackers are quick to exploit. Unified telemetry that spans these domains is critical to understanding how activity in one area influences risk in another. ### Giving SOC teams the time and mandate to hunt proactively Even with strong hardening and continuous monitoring, Living off the Land attacks often evade purely reactive defenses. Their subtlety requires proactive hunting by skilled analysts who understand attacker tradecraft and business context. SOC teams are frequently overwhelmed by routine operational alerts, compliance reporting and administrative overhead. When every hour is consumed by triage, there is little capacity left to search for the faint signals that indicate an emerging Living off the Land intrusion. Effective hunting focuses on intent rather than anomalies. Analysts look for patterns that suggest goal-oriented behavior, such as repeated credential use across systems, subtle privilege escalation or administrative actions that create future access rather than immediate impact. This work requires deep familiarity with how the business actually operates. Analysts must understand which workflows are normal, which are rare and which should never occur. That knowledge cannot be encoded entirely in rules or automated systems. Overall, the most resilient organizations are those that empower SOC teams to think like adversaries while staying grounded in operational reality. This changes detection from a reactive effort into a form of continuous validation that systems are behaving as intended. ## Adapting security strategy to a Living off the Land world Living off the Land attacks represent a long-term evolution in how adversaries operate. As defenses improve, attackers increasingly choose the path of least resistance by abusing trusted tools rather than introducing foreign code. This shift demands a corresponding evolution in security strategy. Perimeter-centric models are no longer sufficient on their own. Enterprises must assume that some level of compromise is inevitable and focus on reducing dwell time and limiting impact. Adapting to this reality requires shifting focus from tools to behavior and from individual events to intent over time. Hardening reduces exposure, but it does not explain why actions are occurring or how they connect. What matters is the sequence of events, their timing and the context across identities and environments. In a Living off the Land world, zero trust must be extended beyond authentication events and enforcement points. 
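To make the earlier monitoring questions concrete, here is a minimal sketch of context-based flagging, assuming simple event records; the field names, baselines and thresholds are illustrative, not any product's schema, and a real system would learn its baselines from historical telemetry.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AdminEvent:
    user: str
    host: str
    tool: str            # e.g., "powershell.exe", "wmic.exe", "psexec.exe"
    timestamp: datetime

# Illustrative baselines; in practice these come from learned behavior profiles.
MAINTENANCE_HOURS = range(1, 5)                   # 01:00-04:59 window (assumption)
USUAL_HOSTS = {"svc_backup": {"db01", "db02"}}    # hosts each account normally touches

def context_flags(event: AdminEvent) -> list[str]:
    """Score a single event by its context, not by the tool's reputation."""
    flags = []
    if event.timestamp.hour not in MAINTENANCE_HOURS:
        flags.append("executed outside the normal maintenance window")
    if event.host not in USUAL_HOSTS.get(event.user, set()):
        flags.append("privileged account touching a host it rarely uses")
    return flags

def looks_like_lateral_movement(events: list[AdminEvent]) -> bool:
    """Chained admin actions across several hosts in a short span suggest intent."""
    hosts = {e.host for e in events}
    span = max(e.timestamp for e in events) - min(e.timestamp for e in events)
    return len(hosts) >= 3 and span.total_seconds() < 900   # 3+ hosts in 15 minutes
```

None of these checks blocks an administrative tool outright; they attach intent signals to activity that is legitimate on its face, which is exactly the posture a Living off the Land world demands.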
The path forward is not chasing every new tool or threat, but understanding how attackers operate, how systems are actually used and how security can align with real business operations. As environments grow more complex, no human analyst can reason about every possible behavior in isolation. Security strategies must evolve to recognize intent at scale, or risk falling behind attacks designed to hide in plain sight. **This article is published as part of the Foundry Expert Contributor Network.** **Want to join?**
Why bizware is becoming the dominant form of software Since the early 1950s, software has slowly moved from an obscure technical discipline to something that touches almost every person’s life every day. The transition was gradual at first. Most people didn’t have direct access to computers, but the businesses they interacted with did. Computers sat in back rooms quietly changing how companies handled inventory, accounting and customer relationships. Computing accelerated in the 1980s and 1990s. The computer went from an obscure machine to something sitting on everyone’s desk and, eventually, in their homes. At a minimum, people needed basic computer skills to complete everyday tasks. Over the last 20 years, computing has evolved even further. It is no longer just a utilitarian tool; it is a fundamental part of daily life. Whether that’s good or bad is debatable, but it’s the reality we live in. And that reality requires massive technological infrastructure. Where businesses once needed buildings, now they also need websites. To explain what this has done to software, it helps to look at another trade. A skilled carpenter can build a beautiful mahogany table, cabinet or chair. Some spend decades mastering joinery, shaping, finishing and countless other techniques. With enough experience, they can build almost anything. But homes are also built out of wood, and homes must be built in enormous quantities. There is massive economic pressure to build them quickly, efficiently and at scale. It would not be practical to build houses the same way master furniture makers build cabinets. The objectives are different. Home construction must happen quickly, with minimal waste, while still meeting building codes and safety standards. It is still carpentry, but it is a different discipline with different constraints. The same thing has happened with software. The massive economic demand for digital infrastructure has created a new category of software work that operates very differently from traditional software engineering. Standing up the technology required to keep modern society running does not require deep knowledge of computer science or the inner workings of computers. Instead, it requires understanding a large ecosystem of specialized tools that assemble the components businesses need. It is still software, but software shaped by business infrastructure rather than by traditional engineering concerns. I call it **bizware**. ## Software has split into two disciplines This distinction becomes clearer when you look at how teams have evolved inside organizations. Traditional software teams are often organized around deep technical problems: building a compiler, optimizing a database engine or designing a new algorithm. Progress is measured by correctness, performance and innovation. Bizware teams focus on something different. Most businesses today are not trying to develop software; they need to deploy software to run their business. They are typically organized around business functions: payments, authentication, internal tools, customer dashboards or analytics pipelines. The goal is not to push the boundaries of computing, but to assemble reliable, secure systems quickly using existing components. This difference in orientation changes how success is measured. In traditional software, elegance and efficiency matter. In bizware, speed, reliability and integration matter more. The system does not need to be perfect; it needs to work consistently and support the business.
## Bizware is driven by business infrastructure, not computer science Many traditional concepts of computer science are not central to bizware. Concepts like von Neumann architecture, NP-completeness or decidability are rarely relevant. Instead, it is far more important to understand authentication systems, infrastructure tooling, security frameworks and deployment pipelines. This has created an entire ecosystem of tools that primarily exist to solve business infrastructure problems. Docker is a good example. Docker solves a deployment problem that businesses face. It does not solve a universal computing problem. _Building_ Docker required deep software expertise, but the people _using_ Docker are leveraging it to solve the business problems that arise from large-scale deployment. The rise of platforms like Docker and Kubernetes reflects this shift toward operational software. These tools exist because companies need consistent environments across development and production. In the beginning, these tools were hard to use. The computers were slow and the software infrastructure was comparatively primitive. A person needed both a command of the tools and a significant traditional software background to use them effectively and efficiently. As the tools have matured, that traditional software development knowledge has become less relevant. To deploy your website globally, you no longer need to understand what NP-completeness means or the nuances of von Neumann architecture. However, outside of business environments, deployment is rarely a major concern. Students, researchers and hobbyists rarely struggle with deployment the way companies do. In contrast, tools like compilers or interpreters are universal; everyone writing software needs them. Software has effectively undergone a kind of speciation, and a new, distinct discipline has emerged. Bizware and traditional software engineering require different skill sets. Both are difficult and require significant expertise, but they emphasize different types of knowledge. Being excellent at one does not automatically make one excellent at the other. That distinction also explains where AI is currently being applied. AI struggles with traditional software development. It is not even close to replacing engineers doing deeply technical traditional software work. For example, if I wanted to design a domain-specific language to describe Kalman filters, AI would be almost useless. That task requires deep understanding across multiple technical fields and the ability to combine them creatively in ways that have never existed before. At the same time, the market for that kind of work is relatively small compared with the need businesses have for bizware. Bizware also operates under very different economic pressures than traditional software. Businesses need digital infrastructure at enormous scale. These systems must be built quickly, reliably and repeatedly across thousands of organizations. Because the problems are highly repetitive, automation becomes practical and extremely valuable. AI can often produce a reasonable starting point because the patterns are well-known and widely reused. This also explains why discussions about AI often become confusing. AI is not impacting all software equally. It is far more effective in domains where problems are repetitive and patterns are well understood. That aligns closely with bizware. In contrast, traditional software development often involves creating something fundamentally new.
That kind of work still requires deep expertise and cannot be easily automated. I explored a related dynamic in my analysis of why hardware and software development fail, where mismatched assumptions between disciplines create systemic problems. Understanding where AI applies and where it does not becomes much easier once the distinction between bizware and traditional software is clear. ## Economic pressure is reshaping how software is built Further, this scale has created strong incentives to standardize and automate as much of the process as possible. Cloud platforms, infrastructure frameworks, containerization and orchestration systems exist primarily to solve these operational problems. Traditional software development is different. It focuses on building new computational capabilities: compilers, algorithms, operating systems, simulation tools and domain-specific systems that push the boundaries of what computers can do. Traditional software development solves software problems. Bizware solves business problems. As a result, we’ve experienced a speciation of expertise and a separation of disciplines. ## Why this distinction matters for companies This divide helps explain many of the tensions inside modern technology companies. Engineers who excel at one discipline are often assumed to be interchangeable with those in the other, even though the skills and objectives are quite different. The market for bizware is enormous. Capitalism constantly pushes toward optimization. That force becomes stronger as the market grows larger. We are seeing the same thing in construction. Companies like Reframe Systems are now building robots designed to automate large parts of home construction. The economic pressure to optimize never disappears. While skilled carpentry is still critical, homebuilding has become commoditized. Bizware isn’t a lesser form of software, just as framing a house isn’t a lesser form of carpentry than building fine furniture. They simply exist to serve different economic needs. Understanding that distinction clarifies what modern software development has become. Software hasn’t disappeared. But the industry that once revolved around computer science now also revolves around operating digital infrastructure at enormous scale. For companies, this distinction has practical implications. This is not really a technical distinction. It is an operational one. Hiring and team organization are focused on keeping the infrastructure running while also keeping it up to date. Before the internet, this used to be the purview of the store managers who needed to keep the store clean and accessible. What used to be physical infrastructure is now digital infrastructure. Traditional software is not extinct, and it is not dying. If anything, it is more important than ever. However, it can feel that way because the scale of traditional development has been completely eclipsed by the scale of bizware. This speciation has already happened; I’m just trying to give it a name. That way, people, businesses and organizations can all agree on what they are doing and what they want to do, because confusion around concepts like software and bizware costs money. **This article is published as part of the Foundry Expert Contributor Network.** **Want to join?**
Managing AI agents and identity in a heightened risk environment Geopolitical tensions are rising. Cyber threats are accelerating. And AI is rapidly expanding the enterprise attack surface. For CIOs and CISOs, the reality is clear: cybersecurity is no longer a defensive function alone. It is now a core element of enterprise resilience. The question leaders should be asking is not simply whether their systems can prevent attacks, but whether their organizations are prepared to detect, contain and recover when something inevitably goes wrong. Ransomware attacks, identity compromise and AI-enabled threats are becoming more sophisticated and more frequent. In this environment, the enterprises that succeed will be those that rethink how security operates from the ground up. ## From prevention to resilience For years, enterprise security strategies focused on prevention. The goal was simple: keep attackers outside the perimeter. But that model no longer reflects today’s reality. Modern security strategies increasingly assume that adversaries may already be inside the network, including sophisticated external threat actors that can circumvent even the best perimeter defenses, as well as insider threats. This shift – from perimeter defense to continuous detection and response – is changing how security teams approach everything from infrastructure monitoring to AI deployments. AI agents, in particular, introduce new layers of complexity, becoming a new category of insider threat. While these systems can automate workflows and unlock significant productivity gains, they can also introduce new vulnerabilities if not carefully governed. We’ve already seen examples of AI agents behaving unpredictably or making flawed decisions in real-world deployments. Even when systems function as designed, they can create new operational and regulatory risks if guardrails are not in place. For example, AI agents have deleted entire codebases, approved buggy code, lied to customers and generated unexpectedly large cloud computing bills. For enterprise leaders, the takeaway is straightforward: AI governance must be a core security discipline. Poorly managed deployments can lead to reputational damage, regulatory exposure, financial loss and operational disruption. In addition to these internal AI risks, external AI-driven threats are increasing dramatically. Realistic deepfakes, automated phishing campaigns and advanced ransomware have shown that traditional prevention strategies are no longer sufficient. The good news is that new tools are emerging to help address these risks. AI-native detection and remediation combined with digital forensics and incident response platforms are enabling organizations to detect and respond to threats faster. These platforms analyze massive volumes of telemetry and behavioral data, helping security teams identify anomalies before they escalate into full-scale incidents. ## Identity is the new perimeter If there is one area where the attack surface has expanded dramatically, it is identity. As organizations adopt cloud infrastructure, SaaS applications and distributed work environments, identity has become the primary gateway to enterprise systems. Attackers know this, and they increasingly target identity systems as the most efficient path into corporate networks. That is why Zero Trust identity architectures are becoming essential. Zero Trust assumes that no user, device or system should be automatically trusted. 
Every request must be verified continuously and access granted based on context, behavior and risk signals. One piece of this solution is Multi-Factor Authentication (MFA), which should be standard across the enterprise. In addition, modern security platforms increasingly analyze behavioral data to verify human users and identify abnormal activity. Signals such as keystroke rhythm, geolocation data, time-of-day data and device motion can greatly improve identity accuracy. Equally important is strong privileged access management (PAM). Elevated privileges should be granted only when necessary and revoked immediately after use, shrinking the vulnerability surface area to the minimum required at any time. This is even more critical today as AI agents have identities and privileges that are unlikely to be required 24/7. An emerging trend is correlating data across the various security and posture management silos, including identity (ISPM), cloud (CSPM), application (ASPM) and data (DSPM). With this, organizations can build unified risk profiles that provide a clearer view of risk and incident progression. This approach allows security teams to map the full pathway of a potential breach from compromised assets to affected applications, users and exposed data. If a vulnerability appears in an engineering environment, for example, security teams can quickly trace how that exposure could cascade through infrastructure, applications and user accounts. If a user (or AI agent) is compromised, the relevant at-risk data, applications and cloud environments can be identified. That level of visibility is becoming essential as enterprise environments grow more complex. ## APIs: The backbone of AI — and a major risk As organizations accelerate AI adoption, APIs are becoming a critical layer of enterprise infrastructure, including the use of Model Context Protocol (MCP) as an orchestration layer. AI systems rely heavily on MCP and various APIs to interact with applications, services and data sources. That means APIs are now one of the most important and most vulnerable components of the enterprise security stack. A recent API Threatstats report showed that more than 35% of AI vulnerabilities involve APIs. When APIs are poorly secured, they can expose sensitive data, internal logic and authentication mechanisms. For CIOs leading AI initiatives, this makes API and MCP security a foundational requirement. Organizations must ensure that APIs are continuously monitored, authenticated and protected against misuse. In many cases, the success or failure of an AI deployment will hinge on how well its API infrastructure is secured. ## Preparing for rogue AI agents Last month, I touched on the rise of autonomous or semi-autonomous AI agents in this column. These systems can perform tasks ranging from software development to customer service to infrastructure management, but their capabilities also introduce new security questions: How should organizations manage identity for AI agents? How should their actions be monitored? And how can enterprises prevent unauthorized or rogue agent activity? Security strategies must now account for the possibility that AI agents are being manipulated, misconfigured or even intentionally designed to behave maliciously. The rapid adoption of new AI tools is amplifying these concerns. Examples abound in recent months. There are numerous instances in which AI agents, despite their sophisticated algorithms, made poor decisions, exposing significant liabilities for their deployers.
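One concrete guardrail here is the just-in-time privilege pattern described earlier, applied to agents and humans alike. The sketch below is a minimal, hypothetical illustration (the names and in-memory store stand in for a real PAM system): elevated access is time-boxed, so expired grants simply stop working.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PrivilegeGrant:
    identity: str        # human user or AI agent
    role: str            # e.g., "deploy:staging" (illustrative role name)
    expires_at: datetime

# Active grants; in practice these would live in the PAM system, not in memory.
GRANTS: list[PrivilegeGrant] = []

def grant_just_in_time(identity: str, role: str, minutes: int = 15) -> PrivilegeGrant:
    """Grant an elevated role only for the window the task actually needs."""
    grant = PrivilegeGrant(
        identity=identity,
        role=role,
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=minutes),
    )
    GRANTS.append(grant)
    return grant

def is_authorized(identity: str, role: str) -> bool:
    """Check every request at use time; nothing is trusted by default."""
    now = datetime.now(timezone.utc)
    return any(
        g.identity == identity and g.role == role and g.expires_at > now
        for g in GRANTS
    )

# An AI agent identity gets the same treatment as a human administrator:
grant_just_in_time("agent:build-bot", "deploy:staging", minutes=10)
assert is_authorized("agent:build-bot", "deploy:staging")
```

The pattern does not make an agent smarter, but it sharply limits what a manipulated or misbehaving agent can do once its window closes.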
Platforms such as OpenClaw, one of the fastest-growing AI tools introduced this year, have also spread so quickly that some organizations are restricting their use until stronger safeguards are implemented. At the same time, smaller companies are gaining access to powerful AI capabilities that were previously available only to large enterprises. That democratization of AI will drive innovation and also increase the potential attack surface across the digital ecosystem. ## The CIO imperative AI adoption is accelerating across every industry. Enterprises are integrating AI agents into development pipelines, business operations and customer engagement systems. But with this opportunity comes responsibility. For CIOs, the priority is not simply deploying AI technologies; it is deploying them securely. This means strengthening identity governance, securing APIs, monitoring AI behavior and investing in platforms that provide real-time visibility into enterprise risk. Organizations that navigate this shift successfully will be those that treat cyber resilience as a strategic capability rather than a compliance exercise. In an era of intelligent systems and autonomous agents, security must go beyond protecting the perimeter; it’s about managing trust across every identity, every API and every system operating inside the enterprise. **This article is published as part of the Foundry Expert Contributor Network.** **Want to join?**
How CIOs can chart a course toward the future they want The future is not determined by vast forces beyond our control. The future lies in human choices, and everyone who uses technology should keep that in mind. The 20th-century French philosopher Henri Bergson put it this way: "Mankind does not sufficiently realize that its future is in its own hands." Each of us has a say in what the future should look like. Futures researchers worry that society is surrendering its agency over what comes next. A handful of prominent CEOs loudly proclaim futures of space migration, virtual reality and AI dominance, and the problem is that many people accept those visions in silence. If people in IT do not speak up, the future will be taken from them before they notice. Making the future rather than simply receiving it is not easy: it means reasoning about what cannot yet be seen and narrowing down the options. And the CIO is in a position to start that discussion inside the organization. Below are five obstacles that stand in the way, and what CIOs can do about each. 1. **Lack of agency: building the belief that "we decide the future"** The future is not yet decided. What the world looks like in five, 10 or 50 years depends on the choices made today. Persuading staff and users to believe this is hard, but through storytelling, dialogue and collaboration you can cultivate the sense that "I can drive change too." 2. **Lack of imagination: the countless options between utopia and dystopia** The future could take an infinite number of shapes, yet people's imaginations inevitably drift to the extremes. A utopia, a civilizational collapse, a surveillance state: the vast range of possibilities in between gets overlooked. Nick Foster, former head of design at Google X, recommends sketching at least four scenarios: the future that is likely to happen, the future that ought to happen, the future that just might happen and the future that must never happen. Talking about the future also demands more specificity. Before the internet and social media arrived, almost no one predicted that surveillance, fake news and algorithmic discrimination would become problems as serious as they are today. Facing today's reality squarely and thinking through what could come next together with stakeholders: that is the CIO's role. 3. **Lack of attention to the future: deliberately reserving time to think ahead** Do you set aside time to think about and debate the future? Even when organizations do, the discussion tends to fixate on either the worst case or the best case, while the future that actually arrives lies somewhere in between. Barbara Cooper, former CIO of Toyota Motors USA, turned her team's attention to the ordinary, everyday rhythms of life five and 10 years out, starting not from a grand vision but from imagining an ordinary day for ordinary people. Taking a cue from this, try a thought experiment: What will we be doing 1,000 years from now? What will the world of 3026 look like? 4. **Lack of passion: confronting indifference to the future** Do people genuinely care where we are collectively headed? Feelings of powerlessness and nihilism spreading across generations, widespread apathy, low trust in institutions, isolation and disengagement all hinder the work of building the future. CIOs can craft stories about the future that answer the questions stakeholders actually care about: Where am I in this future? Will I have control over my own circumstances? Can I be part of something meaningful? Given a story that answers those questions, people start moving on their own. 5. **Lack of situational awareness: today's actions set tomorrow's direction** Many organizations do not know where they are currently headed. What you do today determines where you go tomorrow. It is important to deliberately make time to stop and look around; talk with staff and stakeholders, and you will always come away with new insights.
Data centers are costing local governments billions Tax benefits for hyperscalers and other data center operators are costing local administrations billions of dollars. In the US, three states are already giving away more than $1 billion a year each in potential tax revenue, while 14 are failing to declare how much data center subsidies are costing taxpayers, according to Good Jobs First. The campaign group said the failure to declare the tax subsidies goes against US Generally Accepted Accounting Principles (GAAP), under which, since 2017, they should have been declared as lost revenue. “Tax-abatement laws written long ago for much smaller data centers, predating massive artificial intelligence (AI) facilities, are now unexpectedly costing governments _billions of dollars_ in lost tax revenue,” Good Jobs First said. “Three states, Georgia, Virginia, and Texas, already lose $1 billion or more per year,” it reported in its new study, “Data Center Tax Abatements: Why States and Localities Must Disclose These Soaring Revenue Losses.” While taxpayers may be aggrieved at the tax advantages being dished out to these corporations and the resulting loss of revenue, enterprises looking to run data centers are being offered favorable terms and are well placed to benefit from the incentives. Management consultancy PwC has pointed out that companies can reap the rewards of a variety of tax breaks for data centers. Outside the US, other countries are happy to provide financial breaks to data center operators too: the UK can offer 100% tax relief on energy-saving technology, while Brazil also provides an element of relief for the operation of data centers. _This article first appeared on Network World._
Robot Zuckerberg shows how IT can free up CEOs’ time Mark Zuckerberg, the CEO of Meta, is building an AI version of himself. The virtual CEO is being trained on Zuckerberg’s mannerisms and will be loaded with his views on corporate strategy, the Financial Times reported. The idea is that employees will find the virtual Zuckerberg more accessible than they would the flesh-and-blood original. There are plenty of claims that AI will lead to jobs being eliminated, but until now the CEO job has looked safe. If Zuckerberg’s experiment proves successful, though, even company leaders could be due for the chop. In February, OpenAI’s Sam Altman warned that CEOs could be as vulnerable as other senior executives. “AI superintelligence at some point on its development curve would be capable of doing a better job being the CEO of a major company than any executive, certainly me,” Altman said. “On our current trajectory, we believe we may be only a couple of years away from early versions of true superintelligence.” Klarna CEO Sebastian Siemiatkowski has already tempted fate, using an AI version of himself to present the company’s financial results to analysts, and even to take customer calls. So far, though, he’s kept his job. _This article first appeared on Computerworld._
UK wants to build sovereign AI — with just 0.08% of OpenAI’s market cap The UK government has created a Sovereign AI investment fund with up to £500 million (US$675 million) to spend on turning UK startups into national AI champions. Its support could involve investments of up to £20 million per startup, or provision of up to 1 million GPU-hours of AI compute, and fast-tracking of visas to bring skilled workers to the UK. The multi-million-pound budget sounds impressive, but it’s just 0.08% of OpenAI’s recent $852 billion valuation. That company just received fresh investment of $122 billion, dwarfing the UK’s sovereign fund. Closer to home, that £500 million would buy about 5% of French AI startup Mistral, which has achieved its success by offering a European alternative for businesses that do not want to use American or Chinese AI providers. The UK government does not have a great record when it comes to investing in national IT champions. In the 1960s and 1970s, the government ran the National Enterprise Board, which provided funding to new technology companies, but even the biggest names helped this way have since slipped out of UK ownership: ICL, a mainframe challenger to IBM, eventually became part of Japan’s Fujitsu, while Inmos, an early innovator in parallel computing, is now part of Dutch chip giant STMicroelectronics. _This article first appeared on Computerworld._
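As a quick check on the headline ratio, using the article's own figures:

$$\frac{\$675 \times 10^{6}}{\$852 \times 10^{9}} \approx 0.00079 \approx 0.08\%$$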
Oracle delivers semantic search without LLMs Oracle says its new Trusted Answer Search can deliver reliable results at scale in the enterprise by scouring a governed set of approved documents using vector search instead of large language models (LLMs) and retrieval-augmented generation (RAG). Available for download or accessible through APIs, it works by having enterprises define a curated “search space” of approved reports, documents, or application endpoints paired with metadata, and then using vector-based similarity to match a user’s natural language query to the most relevant pre-approved target, said Tirthankar Lahiri, SVP of mission-critical data and AI engines at Oracle. Instead of retrieving raw text and generating a response, as is typical in RAG systems that rely on LLMs, Trusted Answer Search’s underlying system deterministically maps the query to a specific “match document,” extracts any required parameters, and returns a structured, verifiable outcome such as a report, URL, or action, Lahiri said. A feedback loop enables users to flag incorrect matches and specify the expected result. Lahiri sees a growing enterprise need for more deterministic natural language query systems that eliminate inconsistent responses and provide auditability for compliance purposes. Independent consultant David Linthicum agreed about the potential market for Trusted Answer Search. “The buyer is any enterprise that values predictability over creativity and wants to lower operational risk, especially in regulated industries, such as finance and healthcare,” he said. ## Trade-offs That said, the approach comes with trade-offs that CIOs need to consider, according to Robert Kramer, managing partner at KramerERP. While Trusted Answer Search can reduce inference costs by avoiding heavy LLM usage, it shifts spending toward data curation, governance, and ongoing maintenance, he said. Linthicum, too, sees enterprises adopting the technology having to spend on document curation, taxonomy design, approvals, change management, and ongoing tuning. Scott Bickley, advisory fellow at Info-Tech Research Group, warned of the challenges of keeping curated data current. “As the source data scales upwards to include externally sourced content such as regulatory updates or supplier certifications or market updates that are updated more frequently and where the documents may number in the many thousands, the risk increases,” he said. “The issue comes down to the ability to provide precise answers across a massive data set, especially where documents may contradict one another across versions or when similar language appears different in regulatory contexts. The risk of being served up results that are plausible but wrong goes up,” Bickley added. Oracle’s Lahiri, however, said some of these concerns may be mitigated by how Trusted Answer Search retrieves content. Rather than relying solely on large volumes of static, curated documents that require constant updating, the system can treat “trusted documents” as parameterized URLs that pull in dynamically rendered content from underlying systems, according to Lahiri. ## Live data sources This enables it to generate answers from live data sources such as enterprise applications, APIs, or regularly updated web endpoints, reducing dependence on manually maintained document repositories, he said. Linthicum was not fully convinced by Lahiri’s argument, agreeing only that Oracle’s approach could help reduce content churn.
Linthicum was not fully convinced by Lahiri’s argument, agreeing only that Oracle’s approach could help reduce content churn. “In fast-moving domains, keeping descriptions, synonyms, and mappings current still needs disciplined owners, approvals, and feedback review. It can scale to thousands of targets, but semantic overlap raises maintenance complexity,” he said.

Trusted Answer Search puts Oracle in contention with offerings from rival hyperscalers. Products such as Amazon Kendra, Azure AI Search, Vertex AI Search, and IBM Watson Discovery already support semantic search over enterprise data, often combined with access controls and hybrid retrieval techniques. One key distinction between these offerings and Oracle’s, according to Ashish Chaturvedi, leader of executive research at HFS Research, is that the rival products typically layer generative AI capabilities on top to produce answers.

Enterprises can evaluate Trusted Answer Search by downloading a package that includes components such as vector search, an embedding model to process user queries, and APIs for integration into existing applications and user interfaces. They can also run it through APIs or built-in GUI applications, included in the package as two APEX-based applications: an administrator interface for managing the system and a portal for end users.

_This article first appeared on InfoWorld._
Secure-by-design: 3 principles to safely scale agentic AI

AI is moving from experimentation to execution. What started as copilots is quickly evolving into autonomous AI agents that can make decisions, execute tasks, and operate across enterprise environments. As organizations accelerate adoption of agentic AI, they’re expanding their attack surface in ways traditional security models weren’t built to handle. AI agents interact with identities, APIs, workloads, and data across environments, and attackers who can compromise these agents can also reach an organization’s sensitive resources and assets.

This is where a secure-by-design approach becomes critical. Security can’t be layered on after AI agents are in use. It must be built into how AI systems are developed, deployed, and adopted. Industry efforts, including a recent collaboration between CrowdStrike and NVIDIA, are helping define what it means to secure autonomous agents at scale. Three principles stand out.

### 1. Treat AI agents as privileged identities

AI agents behave like users but operate at a speed and scale no human can match. They access systems and trigger workflows in real time, which makes them a high-value target. If compromised, an AI agent can give an adversary legitimate access to move quickly across environments, creating a new attack path that security teams can’t afford to ignore. Organizations need to treat AI agents as privileged identities from day one. This means enforcing least-privilege access, continuously monitoring behavior, and correlating activity across identity, cloud, endpoint, and additional security domains. Teams require full visibility into what these agents are doing and the ability to stop suspicious activity immediately.

### 2. Secure the full AI lifecycle

Most security efforts today focus on the build phase, especially protecting models and training data. That’s necessary, but not sufficient on its own. The real risk often shows up in production, where AI agents are interacting with live environments. AI agents are deeply connected systems. They rely on APIs, integrate with cloud services, and operate across production workloads. Every connection increases the potential blast radius if something goes wrong. A secure-by-design approach must span the full lifecycle, from build to runtime, to ensure models and data are protected, policies are enforced at deployment, and behavior is continuously monitored once agents are live. Runtime protection is the gap many organizations underestimate. If an AI agent is manipulated or abused, teams need to detect and respond in real time.

### 3. Use AI to defend against AI-driven threats

Adversaries are already using AI to move faster, automate attacks, and evade detection. Defending against them requires meeting speed with speed, and AI is the critical component to deliver that defense. By combining real-time telemetry with AI-driven analytics, organizations can surface subtle and unknown signals that point to compromise. Correlating activity across identity, cloud, endpoint, and data environments helps expose threats before they escalate. This kind of cross-domain visibility is critical because modern attacks don’t stay contained: they move laterally, blend into normal operations, and exploit gaps between tools. AI-powered security helps close those gaps and keep pace with the adversary.
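As a concrete illustration of the first principle, the sketch below treats an agent as a first-class identity whose every action is checked against a least-privilege scope and written to an audit trail. It is a deliberately minimal, hypothetical example; the identifiers and policy model are invented for illustration and do not describe any vendor’s product.

```python
# Minimal sketch: an AI agent modeled as a privileged identity with a
# least-privilege action scope and an audit trail. All names are
# hypothetical; this is not any vendor's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: set[str]          # least-privilege scope
    audit_log: list = field(default_factory=list)

    def invoke(self, action: str, **kwargs):
        allowed = action in self.allowed_actions
        self.audit_log.append({        # every attempt is recorded
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action, "allowed": allowed, "args": kwargs,
        })
        if not allowed:
            raise PermissionError(f"{self.agent_id} may not {action}")
        return f"executed {action}"    # dispatch to the real tool here

# An agent scoped for invoicing cannot quietly escalate to deletion:
agent = AgentIdentity("invoice-bot", {"crm.read", "invoices.create"})
agent.invoke("crm.read", customer="acme")
# agent.invoke("crm.delete")  # raises PermissionError and is logged
```

In a real deployment the same checks would be enforced by the identity provider and correlated with cloud and endpoint telemetry rather than living in application code.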
### Building AI with confidence

Agentic AI is reshaping how work gets done, from automating complex processes to accelerating decision-making across the enterprise. But it also introduces a new class of risk that traditional approaches weren’t designed to address. Organizations that build security into the foundation of their AI systems will be able to move faster with confidence. Those that don’t will be left reacting to threats operating at machine speed. Secure-by-design AI isn’t about slowing innovation; it’s about enabling it. By treating AI agents as identities, securing the full lifecycle, and using AI to stop advanced threats, organizations can scale AI without scaling risk. To learn more about CrowdStrike, visit here.
Not only AI drives the digital transformation of key sectors: taking the pulse of 5G, edge and cloud

Yes, artificial intelligence (AI) is everywhere, but everything discussed yesterday about its development at CIO ForwardTech & ThreatScape Spain, held in Madrid, would be impossible without infrastructure such as 5G, communications, edge computing and cloud models. That much was made clear by **Israel Devesa**, digital general manager and technologist at Grupo Aldesa; **Carlos Garriga**, CIO of IE Business School; and **Rubén Andrés Priego**, general manager of Technology, Operations and Innovation at Singular Bank, in a panel moderated by **Esther Macías**, editorial director of CIO and COMPUTERWORLD in Spain. Three very different companies, but ones in which those technologies play a fundamental role.

Israel Devesa opened his remarks by focusing on the public-facing side and on how the energy sector’s evolution is communicated. “It is essential to explain clearly how this sector is being transformed. The traditional, fairly centralized approach to the electricity grid, in terms of how energy should be produced within Spain, is evolving toward a more distributed model in which the edge gains importance. In this context, it is crucial to have infrastructure capable of analyzing information in real time directly at the edge. In our case, wind turbines and solar panels generate an enormous amount of data, on the basis of which decisions are made about production, its optimization or even moving assets. Not all information, therefore, should travel to central systems; many decisions must be made locally to gain speed and efficiency. This is a clear example of how 5G and cloud computing form an indivisible whole in the way organizations manage information.”

Israel Devesa, digital general manager and technologist at Grupo Aldesa. Garpress | Foundry.

For his part, Rubén Andrés Priego focused his analysis on two fronts. The first was the development of solutions based on the new electronic national ID cards (DNI) with NFC technology for digital banking and onboarding. “This system allows the document’s data to be read with a simple tap of the phone, including certificates validated by the National Police as well as the image on the DNI. This is a significant change from traditional processes, which required recording videos, taking photographs and going through manual validation by operations teams,” he explained. Second, he cited telemetry. “Through mobile devices it is possible to collect behavioral information about how users operate. This makes it possible to detect anomalous situations, such as transactions whose location changes impossibly fast within a few seconds, or to identify whether the interaction is being performed by a human or a machine. With this information, security levels can be adjusted dynamically, reducing friction for users in normal situations and strengthening controls when higher risk is detected,” he added.

Carlos Garriga offered the example of how the IE Business School tower is fitted with sensors to detect how its spaces are used day to day. “You are almost monitoring experiences. This began with covid, which allowed us to rethink and redefine classrooms and processes.”

On 5G and edge, Garriga pointed to examples such as a pilot project bringing virtual reality headsets to all academic activity, “which takes education into a new dimension.”

## The importance of satellite connections

Asked about the subject, Priego explained that “after last year’s power outage, the data centers kept running normally, but the physical branches were left without an internet connection. In response, Starlink’s satellite connectivity solution was deployed across the entire network of branches and head offices. This made it possible to guarantee the system’s resilience, keep operations running in critical situations and explore combined connectivity and energy-autonomy solutions.”

Along the same lines, Devesa added how important it is to have connectivity when 5G fails. “On construction sites, which are often in remote locations, connectivity is a key challenge for integrating systems, monitoring operations and guaranteeing safety. Unlike urban environments, these areas lack infrastructure such as fiber or a stable network, which makes communication difficult. To solve this, companies increasingly turn to solutions such as low-Earth-orbit satellite networks (Starlink, for example), which keep sites connected for long periods. There is also extra complexity in OT compared with IT, owing to less standardized protocols such as Modbus, which demands specialized solutions and equipment.” Consequently, “in this context, the main challenge of 5G is not the technology itself but deploying it in these kinds of environments,” he added.

Can you live without 5G? Esther Macías asked Carlos Garriga. “More than 5G itself, what is critical are its capabilities, especially low latency, which is essential for applications such as IoT or immersive experiences, where response time is key,” he replied. He also shared his view on satellite communications: although he acknowledged that they are “not yet fully deployed” at IE Business School, “they have gained relevance after crisis situations, where guaranteeing continuity of communication and decision-making is fundamental, especially to keep teams and users informed.” “In this context,” he said, closing his thoughts on the matter, “emerging solutions such as direct-to-device satellite 5G are being explored as an effective option to ensure connectivity for critical staff and operational continuity, even in adverse scenarios.”

On the right, Rubén Andrés Priego, general manager of Technology, Operations and Innovation at Singular Bank. Garpress | Foundry.

## Then we ran into cybersecurity

Israel Devesa confirmed that cybersecurity is fundamental in this hyperconnected world. “Cybersecurity has become a growing concern in the energy sector, especially after recent incidents that expose the vulnerability of its infrastructure. In environments such as wind or photovoltaic farms, where operation is increasingly remote, exposure to risk grows, since critical systems are accessed from a distance.” “Unlike other sectors,” he noted, “OT has lagged behind IT on security, but that is changing, driven largely by new regulations that demand higher levels of protection and accountability, even at board level.”

In Devesa’s view, although investment in cybersecurity in renewables has historically been very low, growing risk awareness and regulatory change are driving a steady increase. “Cybersecurity is thus going from being a secondary concern to a key, strategic element within energy-sector organizations,” he said.

On this specific point, Rubén Andrés Priego acknowledged that his company is betting heavily on artificial intelligence, with solutions such as language-model-based assistants to support its bankers, which are already seeing high day-to-day adoption. These models, however, have limitations in availability and reliability, which can pose a risk, especially in critical situations such as direct interaction with customers. For that reason, the bank is considering edge computing to deploy models locally, reducing dependence on external systems and improving resilience and business continuity. He also noted that many specific tasks could be solved better with local models than with general-purpose ones, optimizing performance and efficiency.

Carlos Garriga, CIO of IE Business School. Garpress | Foundry.

## The role of technology in each sector’s development

To close the panel, Esther Macías asked the three experts what role technology will play in the development of the sectors where their companies operate, as well as about their own function.

“What we are seeing, and it has just happened with the war in Iran, is that infrastructure has to be designed for the worst possible scenario. Wars and terrorist attacks are a reality. We have to find ways to replicate our infrastructure; in Dubai, for example, missile debris actually fell on an AWS data center and several banks had a very rough time. In other words, you need redundant infrastructure and to be prepared for the worst that can happen. And, on the other hand, you need to be bold: to dive into AI and all these latest-generation technologies here in Spain. There seems to be a lot of fear around AI. I believe that nervousness comes more from lack of knowledge than from what it can actually cause,” explained Rubén Andrés Priego.

For his part, Devesa replied that “the fundamental thing is to see it as an opportunity.” “At our company,” he continued, “we often say that construction, and industry in general, is years behind sectors such as banking or even education, such as IE Business School. But that very lag is an opportunity. With AI and the new infrastructure, the gap between the industrial world and other sectors is narrowing. And in construction the lag is probably even greater, which makes the potential for improvement significant.” Indeed, he said, “although digitalization is necessary, investment is always scrutinized closely in this sector. Even so, technology costs are falling, which makes it easier to move forward. For us, the key is to understand all of this as an opportunity.”

To finish, Carlos Garriga admitted that “not so much at the level of our industry, higher education, but rather in terms of the technology function within any industry, I think it is a mix of opportunity and challenge,” and went on to develop the point: “We are going through a period of deep redefinition of every industry, but especially of the role of technology. What will the role of technology departments be when technological development is being decentralized? The focus will probably shift toward infrastructure management, risk management and compliance. I often say that the CIO or CTO of the future will take on many of those less ‘attractive’ but critical functions. At the same time, we will gain prominence as enablers of open technology solutions, which the business areas will then use.”
The 10 skills every modern integration architect must master

Enterprise integration has changed fundamentally. What was once a backend technical function is now a strategic capability that determines how quickly an enterprise can adapt, scale and innovate. With SaaS-first architectures, continuous ERP updates, event-driven systems and AI-enabled platforms, integration architects are no longer just connecting systems — they are designing the digital nervous system of the enterprise.

I have spent years leading large-scale cloud and middleware implementations, particularly across Oracle EBS, Oracle Fusion Cloud and various SaaS ecosystems. What I’ve observed is that the gap between good and great integration architects isn’t technical knowledge alone; it’s the breadth of skill, judgment and organizational influence an architect brings to every engagement. The following ten competencies define what separates a modern integration architect from a traditional middleware specialist.

## 1. Platform thinking, not project thinking

**What many get wrong:** Designing integrations to satisfy a single project — an ERP rollout, payroll go-live or CRM deployment — without considering reuse or long-term evolution.

**Why this fails:** SaaS platforms like Oracle Fusion Applications refresh weekly and quarterly. Project-based integrations break repeatedly and accumulate technical debt at a punishing rate.

**Modern skill:** Adopting a cloud integration platform mindset, where iPaaS (e.g., Oracle Integration Cloud) is treated as:

* A shared enterprise platform
* An abstraction layer between SaaS and consumers
* A long-term capability, not a temporary solution
* A source of reusable integrations with long-term enhancement opportunities

The skilled architect also knows that not every integration belongs on the iPaaS platform. High-volume, low-latency integrations might perform better with direct API calls or message queues. Highly complex data transformations might be more maintainable in custom code. The integration architect makes deliberate choices about which integrations belong on which platform based on technical requirements, team skills and long-term maintainability. Modern architects need both the strategic understanding of when these platforms add value and the tactical skills to use them effectively.

## 2. Mastery of iPaaS and cloud-native capabilities

**What many get wrong:** Using iPaaS as a visual mapping tool while ignoring native cloud capabilities.

**Why this fails:** Over-customization increases cost, reduces resilience and bypasses built-in scalability.

**Modern skill:** Deep understanding of integration patterns and architectures. Integration architects must understand the fundamental patterns that govern how systems communicate — these patterns represent proven solutions to recurring challenges, and knowing when and how to apply them is essential. This means knowing how to leverage iPaaS features before writing custom logic:

* Adapters vs REST endpoints
* Lookups, packages and integration patterns
* OCI services such as Streaming, Object Storage and Functions

As enterprises migrate to the cloud and adopt hybrid architectures, integration architects must understand cloud platforms and their unique constraints. We increasingly work in multi-cloud environments, which means designing patterns that work across providers. Rather than building cloud-specific integrations, the architect establishes cloud-agnostic interfaces. Using platform-neutral API formats like JSON for data interchange ensures portability. On a recent HCM implementation, I replaced a polling-based integration pattern with an OIC and OCI Streaming event-driven approach for HR updates. The result was dramatically lower latency and a significant reduction in load on Oracle HCM during peak processing windows. A simplified sketch of that shift follows.
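To make the polling-versus-events contrast concrete, here is a minimal sketch of the two patterns side by side. It is illustrative only; the in-memory queue stands in for a managed event stream, and none of the names below are OIC or OCI Streaming APIs.

```python
# Two ways to move HR updates downstream. The in-memory queue stands in
# for a managed event stream; all names here are placeholders.
import queue
import time

hr_events: "queue.Queue[dict]" = queue.Queue()

def fetch_changes_since(ts: float) -> list[dict]:
    """Placeholder for an expensive 'what changed?' bulk query."""
    return []

def process(employee: dict) -> None:
    print(f"syncing employee {employee.get('id')}")

def polling_sync(interval_s: float = 900.0) -> None:
    """Schedule-driven: pays the full query cost every interval even
    when nothing changed, and adds up to interval_s of latency per
    update."""
    last = time.time()
    while True:
        for employee in fetch_changes_since(last):
            process(employee)
        last = time.time()
        time.sleep(interval_s)

def event_driven_sync() -> None:
    """Change-driven: the source publishes one record per update and is
    never polled; the consumer reacts as soon as an event arrives."""
    while True:
        process(hr_events.get())  # blocks until an event is published
```

The operational win in the HCM example above comes from the same property: work is triggered by change, not by the clock.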
## 3. API-led and event-driven design

**What many get wrong:** Exposing SaaS applications directly to consumers through tightly coupled integrations.

**Why this fails:** Schema changes, API version updates and new consumers create cascading failures that ripple across the entire landscape.

**Modern skill:** Designing API-led and event-driven architectures that genuinely decouple systems. APIs have become the primary integration interface for most modern systems. Integration architects need deep expertise in designing APIs that are intuitive, efficient and maintainable.

Consider what I faced when tasked with exposing customer data from a legacy system. A naive design required multiple calls to retrieve related information: one for basic customer details, another for addresses, another for contact preferences and another for order history, creating a chatty interface and increased latency. Every extra call compounds latency and couples the consumer to internal data structures. A well-designed API, using the mediation capabilities of integration tools, encodes resource relationships so the consumer retrieves what it needs in a predictable, minimal number of calls. The middleware orchestrates calls to backend systems, aggregates the data and exposes a single, consumer-friendly endpoint. This approach reduced round trips, decoupled consumers from backend structures and improved performance by enabling parallel processing. I also considered trade-offs like payload size and introduced selective expansion to avoid over-fetching. Overall, the design aligns with consumer-driven API principles and leverages mediation capabilities effectively, as the sketch below illustrates.
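Here is a minimal sketch of that mediation pattern: one consumer-facing endpoint fans out to backend resources in parallel, aggregates the result and supports selective expansion so clients only fetch what they ask for. The backend functions and field names are hypothetical.

```python
# Sketch of an aggregating mediation endpoint with selective expansion.
# Backend calls are stubbed; in middleware they would be REST requests.
from concurrent.futures import ThreadPoolExecutor

def get_customer(cid: str) -> dict:
    return {"id": cid, "name": "Acme Corp"}

def get_addresses(cid: str) -> list:
    return [{"type": "billing", "city": "Austin"}]

def get_preferences(cid: str) -> dict:
    return {"channel": "email"}

def get_orders(cid: str) -> list:
    return [{"order": "SO-1001", "status": "shipped"}]

EXPANSIONS = {"addresses": get_addresses,
              "preferences": get_preferences,
              "orders": get_orders}

def customer_endpoint(cid: str, expand: frozenset = frozenset()) -> dict:
    """One consumer-friendly call instead of four chatty ones."""
    with ThreadPoolExecutor() as pool:          # fan out in parallel
        futures = {name: pool.submit(fn, cid)
                   for name, fn in EXPANSIONS.items() if name in expand}
        result = get_customer(cid)
        result.update({name: f.result() for name, f in futures.items()})
    return result

# The consumer opts in to exactly the sections it needs:
print(customer_endpoint("C-42", expand=frozenset({"addresses", "orders"})))
```

Selective expansion keeps the single endpoint from becoming an over-fetching trap: consumers opt in to the sections they need and pay only for those backend calls.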
## 4. Canonical data modeling and data governance

**What many get wrong:** Mapping source-to-target schemas directly for every integration. A common anti-pattern is point-to-point schema mapping — directly transforming source data into target formats for every integration. At first, this seems fast. But it doesn’t scale.

**Why this fails:** This approach creates a fragile, tightly coupled ecosystem:

* A single schema change in one system triggers updates across multiple integrations
* Integrations grow from N systems to N² mappings
* Data definitions become inconsistent — “Customer,” “Account,” or “Contact” may mean different things across systems

Over time, teams spend more effort fixing integrations than delivering value. Every system change requires multiple downstream updates, creating a maintenance nightmare that compounds over time.

**Modern skill:** Integration architects increasingly need data engineering skills as the lines between integration and data platforms blur. We often serve as the primary advocates and implementers of master data management strategies. Modern integration architects don’t just move data — they define and govern it. Define the system of record (SoR), establishing authoritative ownership for each data attribute to avoid conflicts and duplication. Defining canonical enterprise data models and enforcing governance through versioning, reusability, security, validation rules, error handling and centralized control at the middleware layer is how we solve that problem at scale. Enable controlled data propagation by defining how updates flow, whether event-driven (real-time sync) or batch (scheduled reconciliation). In modern architectures, integration architects increasingly act as data stewards, enabling scalable MDM strategies and ensuring consistency across distributed systems through centralized mediation layers like OIC.

A canonical ‘Employee’ model I defined for a large financial services client allowed Oracle HCM, multiple payroll providers and identity systems to evolve independently. During a significant HCM upgrade, integration breakage was near zero because the canonical model absorbed the schema changes rather than propagating them to every consumer.
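The sketch below shows the shape of that idea: every system converts to or from one canonical model, so N systems need roughly 2N adapters instead of N² point-to-point mappings. The canonical fields and record layouts are invented for illustration.

```python
# Canonical-model pattern: each system maps to and from one governed
# shape, so adding a system costs one adapter pair instead of a mapping
# per peer. Field names and record layouts are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalEmployee:
    employee_id: str
    legal_name: str
    department: str

def from_hcm(record: dict) -> CanonicalEmployee:
    """Inbound adapter for a hypothetical HCM record layout."""
    return CanonicalEmployee(record["PersonId"], record["DisplayName"],
                             record["DeptCode"])

def to_payroll(emp: CanonicalEmployee) -> dict:
    """Outbound adapter for a hypothetical payroll provider."""
    return {"empNo": emp.employee_id, "name": emp.legal_name,
            "costCenter": emp.department}

# A schema change in HCM touches only from_hcm(); payroll, identity and
# every other consumer keep reading the canonical shape unchanged.
print(to_payroll(from_hcm(
    {"PersonId": "1007", "DisplayName": "Dana Li", "DeptCode": "FIN"})))
```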
## 5. Security-by-design in integration

**What many get wrong:** Treating integration security as a configuration step late in the project.

**Why this fails:** Integration layers handle sensitive payroll, financial and identity data — and are frequent attack vectors. Retrofitting security onto an insecure design rarely works.

**Modern skill:** Modern integration architects must think deeply about security, as integrations often become the weak points in enterprise security postures. Embedding zero-trust principles from the start means:

* OAuth and token-based authentication
* Least-privilege access controls at the integration level
* Centralized secrets and certificate management

When we were building integrations for a healthcare provider, HIPAA compliance wasn’t optional — it shaped every architectural decision. Security controls at multiple levels were non-negotiable: field-level encryption, audit logging, and access controls tied to role and context rather than just credentials. A skilled architect implementing single sign-on for a corporate portal understands not just SAML and OAuth protocols but how to design attribute exchange, just-in-time provisioning and role mapping between disparate systems. I’ve made it a rule to align all OIC integrations with OCI IAM policies from day one and to enforce per-integration security policies rather than relying on shared credentials. On one engagement, that decision prevented a significant security incident when a downstream system was compromised — our integrations were isolated, not exposed.

## 6. Observability and business-centric monitoring

**What many get wrong:** Monitoring integrations only at a technical level — status, error counts and message volume.

**Why this fails:** Technical success does not guarantee business success. An integration that processes every message without error can still fail the business if it processes the wrong messages.

**Modern skill:** Implementing business-aware integration observability. This means instrumenting integrations so the operations team can answer questions like ‘Did payroll actually complete successfully?’ not just ‘Were all messages acknowledged?’ I’ve configured OIC activity streams and OCI Logging Analytics for a payroll integration to surface business-level outcomes: completion rates by pay group, exceptions by category (data issues vs system failures vs delays), SLA tracking and reconciliation indicators (expected vs processed employee counts). Within weeks, the finance team was reviewing dashboards themselves rather than filing tickets to ask us if the run had been completed. That shift from reactive to proactive operations was transformative, significantly reducing turnaround time, improving SLA adherence and increasing trust in integration reliability.
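As a small illustration of what ‘business-aware’ means in practice, the check below reconciles expected against processed employee counts per pay group, rather than counting acknowledged messages. The data shapes and counts are hypothetical.

```python
# Business-aware check for a payroll run: did we account for every
# expected employee in every pay group? Counts here are made up.
EXPECTED = {"salaried": 1200, "hourly": 3400, "contractors": 150}
PROCESSED = {"salaried": 1200, "hourly": 3388, "contractors": 150}

def reconcile(expected: dict, processed: dict) -> list[dict]:
    """Emit one business-level status per pay group, not per message."""
    findings = []
    for group, want in expected.items():
        got = processed.get(group, 0)
        findings.append({
            "pay_group": group, "expected": want, "processed": got,
            "gap": want - got,
            "status": "OK" if got >= want else "INVESTIGATE",
        })
    return findings

for row in reconcile(EXPECTED, PROCESSED):
    print(row)  # feeds a dashboard the finance team can read directly
```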
## 7. Designing for continuous change

**What many get wrong:** Assuming integrations should be ‘stable’ and rarely modified.

**Why this fails:** Cloud environments are defined by constant change — quarterly SaaS updates, API evolution and acquisitions mean no integration is ever truly done. The mistake many teams make is optimizing for initial stability instead of long-term adaptability. This leads to brittle integrations that break with every release cycle, creating fire drills and eroding business trust.

**Modern skill:** Building change-resilient integrations where change is expected, tested and absorbed with minimal disruption through:

* Versioned APIs with clear deprecation policies and backward compatibility
* Contract-first design, so consumers agree on interfaces before implementation begins, with schema validation at runtime and test time
* Automated regression testing that runs before every quarterly update, validating API responses, business flows, edge cases and failure handling

Before each Oracle ERP quarterly update, our automated test suite validated all critical OIC integrations against the new release in a pre-prod environment. We caught breaking changes weeks before they reached production, ensuring seamless business continuity. The peace of mind this creates, for the integration team and for the business, cannot be overstated. Design integrations not for stability, but for evolution — treating change as a constant and embedding resilience through versioning, contract governance, automated validation and decoupled architecture. This shifts integration from a fragile dependency to a durable, adaptable platform capability.

## 8. DevOps and automation for integrations

**What many get wrong:** Treating integrations as manually deployed artifacts.

**Why this fails:** Manual deployments increase risk and slow delivery. They also make audit and compliance conversations unnecessarily painful.

**Modern skill:** Applying CI/CD and DevOps practices to integrations — automated deployment pipelines, environment standardization with traceability and version-controlled artifacts as first-class engineering outputs. On a recent engagement, we promoted integration packages from development to test to production using automated pipelines built on CI/CD tools like FlexDeploy and Jenkins. Deployment errors dropped to near zero and audit evidence was generated automatically with every release. The integration team stopped dreading deployments and started shipping faster.
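Tying these two skills together, the sketch below shows the kind of consumer-driven contract check that can run in an automated pipeline before each quarterly update, flagging provider drift instead of relying on manual smoke tests. The contract format and the sample response are invented for illustration.

```python
# Consumer-driven contract check, suitable for a CI stage that runs
# against a pre-prod release. Contract and response are hypothetical.
CONTRACT_V2 = {
    "employee_id": str,
    "legal_name": str,
    "department": str,
}

def violations(response: dict, contract: dict) -> list[str]:
    """Return contract violations; an empty list means compatible."""
    problems = []
    for fld, ftype in contract.items():
        if fld not in response:
            problems.append(f"missing field: {fld}")
        elif not isinstance(response[fld], ftype):
            problems.append(
                f"{fld}: expected {ftype.__name__}, "
                f"got {type(response[fld]).__name__}")
    return problems

# Simulated response from the new quarterly release in pre-prod;
# note the department field has silently become a numeric code.
sample = {"employee_id": "1007", "legal_name": "Dana Li",
          "department": 400}

found = violations(sample, CONTRACT_V2)
if found:
    print("breaking change detected:", found)  # fail the build here
```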
## 9. Business process and domain expertise

**What many get wrong:** Focusing purely on technical flows without understanding business context.

**Why this fails:** Integrations that work technically may fail operationally. A technically perfect integration built around the wrong business process creates a well-engineered wrong answer.

**Modern skill:** Integration architects frequently serve as bridges between business stakeholders and technical teams. This requires translating business needs into technical requirements and explaining technical constraints in business terms — clearly and without condescension. Armed with process understanding, the architect designs integrations that automate entire workflows rather than just moving data between systems. The difference between a data-mover and a process architect is the difference between a cable and a nervous system.

On a global HR transformation, I spent the first two weeks with the HR operations team, gathering requirements and understanding their business processes before writing any integration specifications. By understanding the full hire-to-retire lifecycle — not just the data flows — I designed integrations that ensured consistency across HR, payroll, finance and identity systems in a way that no purely technical analysis would have produced.

## 10. Leadership and enterprise influence

**What many get wrong:** Assuming integration architects only need technical authority.

**Why this fails:** Integration decisions impact multiple business units and platforms. Without the ability to influence stakeholders, align cross-functional teams and drive adoption, even the best technical design can stall or fail.

**Modern skill:** Acting as a strategic leader, not just a technical expert, while bridging the gap between business priorities and technical execution:

* Influencing architecture decisions across organizational boundaries
* Establishing integration standards and governance frameworks that drive consistency
* Guiding multiple delivery teams toward coherent, enterprise-wide outcomes

Defining enterprise-wide standards reduces duplicated integrations while improving audit readiness and compliance. Technical brilliance alone is insufficient if integration architects can’t effectively communicate their designs and decisions. When I document a complex integration architecture, I create multiple views targeting different audiences. For executive stakeholders, I produce high-level diagrams showing how major systems connect and the business capabilities these integrations enable, with minimal technical jargon; I focus on conveying business benefits, risk mitigation plans and the strategic value of these integrations. For development teams, I provide detailed sequence diagrams, error-handling flows and API documentation with example requests and responses, along with clear guidance for implementing integrations between applications. For operations, I write runbooks for common failure scenarios and explain how to interpret log messages and metrics in the context of business outcomes, with guidance for proactive monitoring and incident response.

Effective architects invest in knowledge transfer — conducting workshops to explain architectural decisions, pairing with developers during implementation to ensure best practices are adopted, and creating decision logs that capture why specific approaches were chosen over alternatives. They provide support during the initial production rollout, ensuring confidence, reliability and operational readiness. Modern integration architects combine deep technical expertise with enterprise influence — communicating effectively, guiding teams, enforcing standards and ensuring that integrations deliver measurable business outcomes. Leadership in this role means shaping organizational decisions, reducing redundancy and turning integrations into a strategic asset.

## The evolving role: What the next five years will demand

The role of integration architects will continue to evolve as technology and business needs change. Artificial intelligence and machine learning are already beginning to influence integration, with intelligent data mapping, automated error resolution, agentic workflows and predictive scaling. Low-code and no-code integration platforms are democratizing integration development, requiring architects to shift focus toward governance, standards and architecture while empowering business users to build simpler integrations themselves.

I believe the architects who thrive will be those who treat learning as a core professional discipline, not an optional add-on. That means reading technical research, experimenting with new tools and participating in communities where ideas get challenged. Modern integration architects design intelligent workflows, automate complex business processes and integrate AI insights into enterprise systems, empowering organizations to achieve faster, smarter and more scalable operations. The fundamental skills that distinguish exceptional integration architects — the ability to understand complex systems, translate between business and technology, design for resilience and scale, and continuously learn and adapt — will remain relevant regardless of how specific technologies evolve. Those who master this diverse skill set will continue to play a critical role in enabling enterprises to harness the full power of their technology investments.

## Learning from failure: The habit that separates the best

The best integration architects treat failures as learning opportunities rather than events to be survived and forgotten. When an integration outage causes significant business disruption, we don’t just fix the immediate problem. We conduct thorough post-mortems to understand root causes, identify systemic issues that contributed to the failure and implement changes to prevent similar problems.

After an integration failure caused data corruption on a project I led, I resisted the pressure to simply restore from backup and move on. We analyzed why error handling didn’t catch the problem, why monitoring didn’t detect the corruption earlier, why automated testing didn’t surface the bug, and how recovery and reconciliation could be optimized to minimize business impact. We used these insights to redesign error-handling patterns to fail safely and recover gracefully, enhance monitoring with business-aware observability and anomaly detection, expand automated test coverage across all critical integrations, and implement reconciliation and recovery procedures that minimize downtime and data loss. This approach builds resilience, reduces risk and enhances trust across business and technical teams. Six months later, that investment paid off when a similar failure mode was caught in staging rather than production.

Successful architects maintain awareness of emerging technologies and patterns. We experiment with new tools, strategies and approaches, attend conferences and webinars, participate in professional communities and read technical blogs, case studies and research papers. Staying current is not optional; it is how integration architects remain relevant, proactive and capable of driving innovation.

## A rare combination

The modern integration architect is no longer just a middleware specialist. We are platform strategists, security architects, business translators and technical leaders. Enterprises that invest in these skills build integration platforms that are resilient, secure and scalable. Those that don’t find themselves constantly reacting to failures, upgrades and missed business opportunities — fighting the same fires in every quarterly cycle. Integration architecture is not a purely technical discipline, nor is it purely strategic.
It requires a rare combination of deep technical expertise, business acumen, communication skills and the ability to navigate organizational complexity. Those who develop this multifaceted skill set find themselves uniquely positioned to drive meaningful business transformation in an increasingly interconnected digital world. In a cloud-first world, integration excellence is enterprise excellence. **This article is published as part of the Foundry Expert Contributor Network.** **Want to join?**