White Paper

MultiCloud a Reality – Security Considerations for the “In‑Between” World

MultiCloud has moved from a long-term “strategy option” to a present-day operating condition. Enterprises adopt multiple clouds intentionally for resilience, best-of-breed services, compliance needs, and procurement realities, but they also end up in multicloud unintentionally through mergers, SaaS adoption, and distributed modernization. Gartner’s 2025 outlook captures the direction clearly: demand for AI/ML is expected to surge so dramatically that Gartner predicts 50% of cloud compute resources will be devoted to AI workloads by 2029 (up from less than 10% today). At the same time, multicloud complexity is material enough that Gartner also predicts more than 50% of organizations will not get the expected results from multicloud implementations by 2029, because connecting across providers is hard.

This “multicloud by reality” pattern is visible in large-scale government platforms too. The U.S. Air Force’s Cloud One explicitly positions itself as a multi-cloud, multi-vendor ecosystem managed by the Air Force and offered as an enterprise cloud for mission owners requiring commercial cloud services delivered with speed, scale, and security.  Whether the organization is public or private sector, the architectural outcome is similar: workloads, data, and controls are distributed across environments, and the operational model must keep up.

AI accelerates the trend further, not just because it increases demand for compute, but because it raises the stakes around where data lives, how it is governed, and how quickly models and applications can securely access it. Gartner’s press release makes an important practical observation: organizations may “need to bring AI to where the data is” to support the coming surge.  In multicloud terms, that means the “data plane” (where data resides) and the “compute plane” (where models and services execute) may not naturally align inside a single cloud account or even a single provider, especially when global operations, latency, and sovereignty requirements come into play.

Security becomes hardest not inside any single cloud provider, but at the seams: what happens “in between” clouds, between cloud and on-prem, and across third-party SaaS integrations. The attack surface expands through persistent connectivity, federated identity, API-to-API traffic, and cross-cloud routing dependencies. Those seams are also where governance can fragment – different parties may own different platforms, different teams may set different policies, and different vendors may deliver different security tools. The result is a predictable blind spot: strong visibility within each environment, weak visibility across the end-to-end transaction path that spans environments.

The industry is beginning to treat “the in-between” as a first-class product category. In December 2025, Reuters reported that Amazon and Google launched a jointly developed multicloud networking service to allow customers to establish private, high-speed links between AWS and Google Cloud in minutes rather than weeks, explicitly framed as meeting demand for reliable connectivity at a time when even brief disruptions can cause major outages. AWS and Google also described this effort as introducing a new open specification for network interoperability, aimed at reducing the “do-it-yourself” complexity that has historically characterized multicloud connections. When hyperscalers collaborate to make multicloud connectivity easier, it’s a strong signal that “connecting clouds” is no longer a corner case; it is the mainstream architecture.

Operational resilience and cybersecurity are inseparable in this environment because outages, disruptions, and attacks can look identical at the outset. The 2025 AWS outage is a useful reminder of how widely a cloud incident can propagate; AP News reported that the disruption was pinned on AWS’s DNS, and that it impacted a wide range of consumer and enterprise services around the world.  Even when experts assess an incident as a technical failure rather than an attack, the operational response still has to treat the problem as potentially adversarial until validated, because threat actors frequently exploit confusion and degraded visibility during large-scale disruptions.

The “single pane of glass” goal remains valid, but it must be cross-cloud by design. Traditional CSP consoles provide strong tooling within their own boundaries, yet multicloud environments need consolidated visibility across identity, network paths, workloads, and data access patterns. This is also where automation becomes critical: in complex environments, you cannot rely on manual remediation without accumulating operational risk and cost. Reuters described Palo Alto Networks’ Cortex Cloud 2.0 as including a “cloud command center” intended to provide a unified view of cloud risks and threats across services of multiple providers, reflecting the industry-wide shift toward consolidating cross-cloud telemetry and response workflows.  The direction of travel is clear: security teams want fewer fragmented views and more integrated, actionable signals that map to how the business actually delivers services.

Zero Trust is the only scalable security model for multicloud, but it has to be implemented as an operating framework rather than a one-time control deployment. NIST SP 800-207 defines Zero Trust Architecture as an approach that shifts defenses away from static perimeters and toward users, assets, and resources, with the idea that there is no implicit trust granted based solely on location or network placement.  CISA’s Zero Trust Maturity Model provides a structured progression for improving Zero Trust capabilities across pillars like identity, device, network/environment, application/workload, and data.  The DoD has also formalized its approach through a published Zero Trust Strategy, including pillars and a roadmap-oriented execution posture.  In multicloud, these frameworks matter because they are designed to handle the exact conditions that break perimeter assumptions: distributed workloads, hybrid paths, and constant change.
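The spirit of NIST SP 800-207’s policy decision point can be made concrete with a toy sketch: every request is evaluated on explicit signals such as identity, device posture, and resource sensitivity, and network location never appears in the decision. The field names and the three-level sensitivity scale below are illustrative assumptions for this sketch, not NIST-prescribed attributes.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # strong identity verification passed
    device_compliant: bool     # device posture check passed
    resource_sensitivity: str  # "public" | "internal" | "restricted"
    user_clearance: str        # same scale as resource_sensitivity

# Illustrative ordering of sensitivity levels (an assumption of this sketch).
LEVELS = {"public": 0, "internal": 1, "restricted": 2}

def decide(req: AccessRequest) -> bool:
    """Grant access only when every explicit signal passes.

    Note what is absent: no source IP, subnet, or VPN check -- location
    confers no implicit trust, per the Zero Trust model.
    """
    if not (req.user_authenticated and req.device_compliant):
        return False
    return LEVELS[req.user_clearance] >= LEVELS[req.resource_sensitivity]
```

In a real deployment this logic lives in a dedicated policy engine and is evaluated continuously, not once per session; the sketch only shows why per-request, attribute-driven decisions scale across clouds where perimeter rules cannot.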

Data-centric security is the hardest part of Zero Trust, and it becomes even harder in multicloud. The core challenge is that access decisions need to be driven by the sensitivity and attributes of the data itself, not by which cloud provider happens to host it. That requires consistent data labeling, tagging, provenance, and access policy enforcement across environments. It also requires treating “AI training data” as a high-value target. GovCIO reported that the Air Force is creating a protected “golden record” of training data to reduce the chances of data poisoning in large-scale projects, emphasizing separate protection and monitoring of that clean training baseline. This aligns with broader security guidance around AI: OWASP’s GenAI risk guidance highlights training data poisoning as an integrity attack that can compromise model behavior by injecting malicious or biased data. In a multicloud context, data poisoning isn’t just an AI problem; it is a governance and lifecycle problem, because the training pipeline often spans storage, ETL, labeling, model building, and deployment tooling that may live in different environments.
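One concrete building block for a protected training baseline is integrity verification: record a cryptographic digest of every file in the “golden record” at the time it is validated, then re-verify before each training run so silent modification or injection is detected regardless of which cloud the data traverses. The sketch below is a minimal illustration of that pattern; it is not a description of the Air Force’s implementation, and the manifest format is an assumption.

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: Path) -> dict:
    """Record a SHA-256 digest for every file in the training baseline."""
    return {
        str(p.relative_to(data_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }

def verify_baseline(data_dir: Path, manifest: dict) -> list:
    """Return files that were modified, removed, or added since the manifest.

    An empty list means the baseline still matches the golden record.
    """
    current = build_manifest(data_dir)
    drifted = [f for f, digest in manifest.items() if current.get(f) != digest]
    drifted += [f for f in current if f not in manifest]  # unexpected additions
    return drifted
```

In practice the manifest itself would be signed and stored separately from the data (so an attacker who can alter the dataset cannot also alter the record of what “clean” looks like), which is the same separation-of-protection idea the golden-record approach emphasizes.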

This is where Shivaji Sengupta’s public thought leadership and NXTKey’s content are directly relevant: the emphasis repeatedly comes back to foundations: data quality, governance, and operational discipline, rather than chasing tools. NXTKey’s content on the role of quality data argues that secure, high-quality data underpins reliable AI outcomes and explicitly calls out threats like data poisoning and the need for integrity protections across the AI lifecycle. NXTKey’s discussion on “AI transformation from digital transformation” frames AI transformation as building on prior modernization, requiring changes in how organizations manage data, risk, and operations, not just adopting a new model or platform. These themes map neatly onto multicloud security reality: the most powerful multicloud architectures fail not because a cloud is “insecure,” but because governance and data controls don’t scale across boundaries.

Practical multicloud security also depends on governance mechanics that eliminate ambiguity. Multicloud amplifies policy conflicts because different platform owners may control different security settings, different teams may approve different tools, and different vendors may implement different defaults. The security solution is partly technical, but it is also procedural: a consistent onboarding process (“one front door” for integration), a clearly defined set of approved shared services and reference patterns, and continuous configuration validation so drift and misconfiguration are detected and corrected quickly. Policy-as-code and CI/CD-integrated guardrails are increasingly important because the change rate in multicloud environments is simply too high for manual review to keep up without creating unacceptable delays.
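The policy-as-code idea can be sketched in a few lines: express each guardrail as a named predicate over a resource’s declared configuration, and have the CI/CD pipeline fail any change that violates one before it reaches any cloud. The resource shape and policy names below are illustrative assumptions; production teams typically use a dedicated engine (such as Open Policy Agent) rather than hand-rolled checks.

```python
# Each policy is a named predicate over a resource's configuration dict.
# Returning False marks a violation; the pipeline blocks the change.
POLICIES = {
    "no-public-buckets": lambda r: not (
        r["type"] == "storage_bucket" and r.get("public_access")
    ),
    "encryption-at-rest": lambda r: r.get("encrypted", False),
}

def evaluate_policies(resources: list) -> list:
    """Return (resource_id, policy_name) pairs for every violation found."""
    violations = []
    for res in resources:
        for name, check in POLICIES.items():
            if not check(res):
                violations.append((res["id"], name))
    return violations
```

The value of this structure in multicloud is that the same policy set runs identically against configurations destined for any provider, so guardrails stop depending on which team or vendor owns a given platform’s console defaults.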

The workforce dimension is the multiplier and often the limiting factor. Multicloud demands engineers and security operators who can reason across providers and across layers: identity, network paths, application behavior, and data policy. NXTKey has invested in this workforce pipeline through its Applied Cyber Security Course at Delaware State University, described as a job-applied course taught by practitioners delivering services to federal agencies and designed to bridge academic knowledge with live scenarios.  NXTKey also highlights that it developed and continues to deliver this Applied Cyber Security program to equip students with practical experience using key cyber technologies to protect valuable data assets and anticipate threats.  That practical, scenario-based mindset is exactly what multicloud security needs, because many of the most consequential failures occur in complex, cross-domain situations where ownership is shared and visibility is partial.

Market activity reinforces that multicloud security is a primary battleground. TechCrunch noted that Google positioned its $32B Wiz acquisition explicitly as “multicloud,” framing it as protection across major clouds and even on-prem environments because that’s what customers are already operating.  Bloomberg’s coverage of the Wiz deal similarly emphasized that Wiz planned to offer services to multiple cloud providers, underscoring that even cloud-provider-led acquisitions are being framed around cross-cloud customer reality rather than single-vendor lock-in.  And Reuters reported in November 2025 that Alphabet’s $32B Wiz acquisition cleared the U.S. DOJ antitrust review, a signal that cross-cloud security platforms are now strategically significant enough to draw and survive regulatory attention.

The most durable conclusion is that multicloud itself isn’t the risk; unmanaged seams are. Organizations that treat cross-cloud connectivity, shared identity, data governance, and unified telemetry as first-class design elements can make multicloud an advantage: improving resilience, accelerating modernization, and enabling AI at scale. Organizations that treat multicloud as “just more accounts” end up with fragmented visibility, inconsistent guardrails, and brittle operational response precisely when the environment is under the most stress. The path forward is not a single vendor or a single tool, but a consistent architecture standard, a data-centric Zero Trust posture, and a workforce equipped to operate the full end-to-end system across clouds and the “in-between” that actually makes it work.