Clouds
Clouds serve at least three roles in the clean energy transition:
Mediators: Clouds connect grid components together.
Consumers: Clouds consume a lot of energy.
Providers: Clouds—and this is new—increasingly produce energy, in part to support AI growth.
This post will cover each of these roles, if only in summary detail.
As I’ve mentioned to a few readers by now, this topic has turned out to be much, much bigger than I expected. I may cover some part of this topic in more detail in the next post. For now, let’s delight in the summary.
A prehistory of the cloud/grid relationship
Until very recently, communication between grid components has not occurred via TCP/IP. Grid components were effectively “airgapped”—the operational technology (OT) that makes grids work was entirely separate from any information technology (IT) code that might manage, say, customer billing logic, maintenance requests, etc.
Sadly for my field (cybersecurity), those days are over. The grid is now, effectively, internet-connected, and at various points. “Behind the meter,” smart thermostats, home batteries, electric cars, and rooftop solar panels are modulating demand (and supply). (This setup can get way crazier at, say, a factory that makes its own electricity from hydrogen; see Prosumers.) In front of the meter, big renewables projects, like solar or battery farms, will typically have some degree of internet-mediated orchestration and automation. (Which can be really, really bad, by the way.)
At this point, people in my discipline will typically say “thanks IoT, computers were a bad idea,” and they wouldn’t be totally wrong. The FrostyGoop incident of January 2024, in which malware manipulated temperature controllers, left 600 apartment buildings in Ukraine without heating for 48 hours.
But! But. It’s actually really hard to imagine a clean energy transition being successful without these components, for reasons we’ve discussed at length elsewhere.
So, we go into harm reduction mode. What’s a distributed energy resource (DER) manufacturer, or utility operator, to do?
What they’ve always done: Outsource the problem!
Clouds as mediators
Here enter clouds, in their foremost role: as mediators between smart devices. Instead of architecting point-to-point relationships between grid components, it’s much, much easier, and an objectively better security idea, to use clouds instead. Cloud security defaults are just better than what you’re going to roll in-house. (Don’t believe me? Over 46,000 internet-exposed industrial control system devices globally communicate via vulnerable protocols.)
A typical setup might look like this. You’re building, say, a home battery. How will your company monitor and orchestrate all the deployed batteries (for health, firmware updates, etc.)? How will end-users’ smartphone apps connect to the batteries? How will the batteries figure out when to turn on and off (charge when grid electricity is cheap, discharge when it’s expensive)? The easiest solution, again and again, is just to wire everything up in AWS or Google Cloud or whatever. Hire a cloud security expert to make sure creds and authz are well-managed, and call it a day.
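To make the control loop concrete, here is a minimal sketch of the charge/discharge logic such a cloud orchestrator might run. Everything here (the thresholds, the fleet structure, the function names) is hypothetical, not any vendor's actual API.

```python
# Hypothetical sketch of the cloud-side control loop described above.
# The thresholds, fleet structure, and names are illustrative only.

CHEAP = 0.08      # $/kWh: charge below this price
EXPENSIVE = 0.30  # $/kWh: discharge above this price

def decide_action(price_per_kwh: float, state_of_charge: float) -> str:
    """Command the cloud orchestrator would push to one battery."""
    if price_per_kwh <= CHEAP and state_of_charge < 0.95:
        return "charge"
    if price_per_kwh >= EXPENSIVE and state_of_charge > 0.20:
        return "discharge"
    return "idle"

# The cloud applies the same rule across the whole deployed fleet:
fleet = {"battery-001": 0.50, "battery-002": 0.98}  # id -> state of charge
commands = {bid: decide_action(0.05, soc) for bid, soc in fleet.items()}
# commands == {"battery-001": "charge", "battery-002": "idle"}
```

The point isn't the logic, which is trivial; it's that the only sane place to run it for a million deployed batteries is a cloud.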
Now imagine every DER manufacturer, reseller, or installer doing the same thing. Obviously, we have an issue here: the cloud and the grid have become extremely tightly coupled. A cloud outage could cause a power outage.1
Market concentration amplifies these risks. AWS maintains 30-33% global market share, Azure holds 20-25%, and Google Cloud 10-12%, meaning the "Big Three" control 60-68% of the total market. In the energy sector specifically, AWS leads with 34% market share. This concentration creates dangerous single points of failure as energy companies become increasingly dependent on a handful of providers for critical infrastructure operations.
Remember: attackers do not need access to all operators to achieve widespread effects. The SUN:DOWN research report (2024), covered previously in this Substack, demonstrated that attackers would need to control fewer than 2% of grid-connected inverters to destabilize the European grid!
Clouds as consumers
This point has been so widely made that I hesitate to spend too much time on it, but broadly, data centers are massive energy consumers—accounting for 1% of global electricity consumption today and projected to reach 3-10% by 2030.2 (For this reason, and even before the current AI boom, clouds were often built around energy subsidies. Jenna Burrell (2020) discusses this phenomenon: in rural Oregon, Meta planned a data center in part because it was near a WPA-era hydroelectric utility co-op. Lots of town/gown issues, class, labor; very much worth a read for those with an STS bent like me.)
AI is pouring gasoline on this fire. A ChatGPT query uses an estimated 2.9 watt-hours, compared to 0.3 for a Google search—roughly a tenfold increase. Every AI breakthrough means more strain on the grid.
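For a sense of scale, a quick back-of-the-envelope calculation using the per-query figures above. The query volume is an illustrative assumption on my part, not a measured statistic.

```python
# Back-of-the-envelope math for the per-query figures in the text.
# The daily query volume is an illustrative assumption, not a real number.
CHATGPT_WH = 2.9   # watt-hours per ChatGPT query (figure from the text)
SEARCH_WH = 0.3    # watt-hours per Google search (figure from the text)

ratio = CHATGPT_WH / SEARCH_WH  # roughly a tenfold increase per query

# Suppose a billion queries a day shifted from search to an LLM:
queries_per_day = 1_000_000_000
extra_wh_per_day = (CHATGPT_WH - SEARCH_WH) * queries_per_day
extra_gwh_per_year = extra_wh_per_day * 365 / 1e9  # Wh -> GWh

# That is on the order of a terawatt-hour of new demand per year,
# from one hypothetical usage shift alone.
```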
Which brings us to our next trend: due to their ever-growing hunger for energy, data centers are looking to become producers as well as (or instead of) mere consumers.
Clouds as producers
Faced with insatiable energy demands and unreliable grids, hyperscalers have decided to cut out the middleman. Why wait for utilities to build capacity when you can generate your own power—preferably right next door?
The nuclear renaissance is being led not by utilities, but by tech companies. Microsoft signed a 20-year deal to restart Three Mile Island, bringing 835 MW of carbon-free power exclusively for its data centers. Amazon purchased a data center campus directly connected to the 2.5 GW Susquehanna nuclear plant. Meta’s got one in Illinois.
This model already blurs the line between “data center” and “energy producer.”
But these deals rely on existing nuclear facilities. There are only so many of those. So, armed with more money than any companies in history, why not… build your own power facility?
The model? "Bring Your Own Power" data centers that operate as energy islands, connected to but independent from the grid. There’s a whole company looking to specialize in building power on-site for data centers. (Why wait in the interconnection queue?) Google’s partnering with Kairos Power to build small modular reactors specifically for its facilities. In Ireland, Microsoft is testing hydrogen fuel cells.
This shift fundamentally changes the cloud-grid relationship. Data centers aren't just consumers or even prosumers—they're becoming full-fledged utilities. And to do so, they’re building large-scale power generation facilities that were never designed to be part of “the grid” as we know it. But that won’t stop them from, you know, becoming part of the grid. Some are exploring selling excess power back to the grid during peak demand periods.
Risks abound where the three roles collide
What could possibly go wrong?
For one thing, the geographic concentration of data center facilities creates “perfect storm” conditions. For reasons, a tremendous proportion of all the world’s computing power is agglomerated in Northern Virginia. Northern Virginia's data center cluster consumes so much power that Dominion Energy supplies 20% of Virginia's electricity to these facilities. One region! One-fifth of a state's power!
As I’ve discussed at great length in other places, this kind of concentration is not great for the internet’s stability. But it’s also bad for the grid. What happens when the power hiccups and the cloud goes down? (Power-related outages cause 43-50% of all data center failures.) Well, now the grid might go down too.
This whole situation can be quite a pain to recover from. If the internet is down because the grid is down, which is down because the internet is down... Like, from a recovery perspective, where does the internet end and the grid begin? No one really knows. So that’s bad.
Obviously, climate change compounds these vulnerabilities. The July 2022 UK heatwave forced Google and Oracle London data centers offline when temperatures reached 105°F, exceeding 90°F design limits. Most facilities cannot sustain operations above 90°F, which is a problem, because there will be lots more days over 90°F in the future than there have been in the past.
What if we started regulating clouds like the parts of the grid they are?
Imagine: A summer heatwave pushes California's grid to its limits. At the same moment, a nuclear-powered data center is humming away. What if that data center could instantly shift its workloads to wind-powered facilities in Texas, then send its nuclear capacity to California’s grid?
So: what if we started regulating clouds like the parts of the grid they are?
What’s in it for the companies? Federal dollars for their energy projects!
What’s in it for the rest of us is a bit more obvious. We get more grid capacity, ideally clean, and hopefully subsidized at least in part by the data center operators’ profit motives. And (for the tech-policy-is-industrial/competition/defence-policy set) we get an additional regulatory lever over data centers—the very material heart of AI, and tech writ large.
There is some precedent to this idea. Wierman et al. (2020) demonstrated that data centers can provide frequency regulation through Dynamic Voltage Frequency Scaling, achieving 3% additional regulation capacity. Google's carbon-intelligent computing platform and Microsoft's grid-interactive UPS trials represent practical implementations of these concepts.
Rather than treating these dependencies as inevitable risks, we can transform cloud providers from passive energy consumers into active grid partners. Here are three high-impact policy interventions:
1. Make Hyperscale Campuses Dispatchable Flexible Loads
The Opportunity: Data centers represent enormous, controllable electrical loads that could provide real-time demand response and peak-shaving services.
Policy Framework: Fold large data-center clusters (>10 MW) into the “must-offer” category for flexible-load aggregations under FERC Order 2222 or its Canadian/EU equivalents. This would oblige them to register curtailable capacity and respond to operator signals during grid stress.
Incentive Structure: Guarantee that verified curtailments can earn the same ancillary-service payments that generators receive. This creates revenue streams that offset the operational complexity of demand response programs.
Technical Feasibility: Google is already testing carbon-aware schedulers that relocate batch and AI-training workloads based on grid conditions. The infrastructure exists—we need policy frameworks to scale it.
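The core of carbon-aware scheduling is almost embarrassingly simple: among the regions that can run a deferrable job, pick the one whose grid is cleanest right now. A toy sketch follows; the region names and intensity numbers are invented for illustration, and Google's actual system is, of course, far more sophisticated.

```python
# Toy version of carbon-aware scheduling: route deferrable (batch /
# training) jobs to whichever region's grid is cleanest right now.
# Region names and intensity values are invented for illustration.

def pick_region(carbon_intensity: dict[str, float]) -> str:
    """Return the region with the lowest current carbon intensity (gCO2/kWh)."""
    return min(carbon_intensity, key=carbon_intensity.get)

snapshot = {
    "region-a": 410.0,  # fossil-heavy grid, gCO2/kWh (illustrative)
    "region-b": 320.0,
    "region-c": 120.0,  # lots of hydro and wind (illustrative)
}
best = pick_region(snapshot)  # "region-c"
```

A real scheduler also has to weigh data-residency rules, latency, and transfer costs; the policy question is how to make the carbon term in that trade-off bite.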
2. Turn Backup Batteries and Fuel Cells into Grid Assets
The Opportunity: Data centers maintain massive battery backup systems that sit idle 99% of the time. These could provide fast frequency response, spinning reserve, and black-start capability.
Policy Framework: Require new UPS and on-site battery systems to be bidirectional and certified for grid support (e.g., IEEE 1547 Category B). This ensures that backup power investments serve dual purposes.
Incentive Structure: Let providers monetize these resources in capacity or balancing markets. Microsoft's “grid-interactive UPS” pilot shows the technical path—policy should create the market path.
Scale Potential: Consider that Northern Virginia's 300 data centers likely represent gigawatts of backup battery capacity. Converting even a fraction to grid-interactive resources could significantly enhance regional grid resilience.
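To illustrate what "grid-interactive" means in practice, here is a toy droop-control rule of the kind such a UPS might use for fast frequency response: inject power in proportion to the frequency deviation, outside a small deadband. All parameters (deadband, droop, rating) are illustrative assumptions, not from any standard or vendor spec.

```python
# Toy droop rule for fast frequency response from a grid-interactive UPS.
# All parameters below are illustrative assumptions, not vendor specs.

NOMINAL_HZ = 60.0
DEADBAND_HZ = 0.036      # ignore tiny deviations (illustrative value)
DROOP = 0.05             # 5% droop: full output at a 5% frequency deviation
UPS_RATED_KW = 500.0     # hypothetical UPS rating

def droop_response_kw(freq_hz: float) -> float:
    """kW to inject (+) when frequency sags, or absorb (-) when it rises."""
    deviation = NOMINAL_HZ - freq_hz
    if abs(deviation) <= DEADBAND_HZ:
        return 0.0
    response = (deviation / (DROOP * NOMINAL_HZ)) * UPS_RATED_KW
    return max(-UPS_RATED_KW, min(UPS_RATED_KW, response))
```

With these numbers, a 0.3 Hz sag (59.7 Hz) calls for 50 kW of injection, and a deep sag saturates at the full 500 kW rating; multiply by hundreds of facilities and the regional numbers get interesting.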
3. My Favorite: Price Compute Where Excess Renewables Are, Not Where Demand Is
The Opportunity: Spatial load-shifting away from constrained grid nodes could dramatically improve renewable energy utilization and grid efficiency.
Policy Framework: Impose location-based carbon fees—a surcharge on MWh consumed in high-carbon or capacity-constrained zones. This creates market signals to relocate energy-intensive operations.
Incentive Structure: Offer tariff discounts for relocating batch or AI-training workloads to over-supplied regions. When renewable curtailment is imminent, electricity should be nearly free for data centers willing to absorb excess generation.
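A toy cost model shows how the two levers combine: a carbon surcharge raises the effective price in dirty, constrained zones, while a curtailment discount lowers it where renewables are over-supplied. All prices and rates below are invented for illustration.

```python
# Toy cost model for location-based pricing: a carbon surcharge plus a
# curtailment discount. All prices and rates are invented for illustration.

def effective_price(base_mwh: float, carbon_g_kwh: float,
                    fee_per_tonne: float, curtailment_discount: float) -> float:
    """$ per MWh after the carbon fee and any renewable-surplus discount."""
    tonnes_per_mwh = carbon_g_kwh * 1000 / 1e6  # g/kWh -> tonnes/MWh
    return base_mwh + tonnes_per_mwh * fee_per_tonne - curtailment_discount

# A $50/tonne fee flips the comparison between a cheap-but-dirty zone
# and a pricier-but-clean zone offering a curtailment discount:
dirty = effective_price(base_mwh=40.0, carbon_g_kwh=400.0,
                        fee_per_tonne=50.0, curtailment_discount=0.0)   # 60.0
clean = effective_price(base_mwh=35.0, carbon_g_kwh=100.0,
                        fee_per_tonne=50.0, curtailment_discount=20.0)  # 20.0
```

Once the effective price in the clean zone undercuts the dirty one, relocating deferrable workloads stops being a corporate-sustainability gesture and becomes simple cost minimization.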
The Path Forward
The convergence of cloud computing and clean energy is accelerating whether we plan for it or not. Climate projections suggest design parameter exceedance will increase 200-300% by 2040, with cooling energy requirements rising 15-27% in affected regions. We cannot afford reactive policies.3
The three interventions outlined above—dispatchable loads, grid-interactive storage, and location-based pricing—represent starting points, not endpoints. Success requires acknowledging that cloud providers are becoming critical infrastructure operators whether they want to be or not. The question is whether we'll create policy frameworks that harness this transformation for grid resilience and clean energy acceleration, or whether we'll stumble into a future where our most critical systems depend on uncoordinated private actors with misaligned incentives.
As we've seen throughout this blog series, the clean energy transition creates new categories of risk that our existing frameworks struggle to manage. The cloud-grid convergence may be the most consequential of these challenges—and our biggest opportunity to get security and policy right from the start.
Thanks to colleagues at Berkeley's Center for Long-Term Cybersecurity and the Institute for Security in Technology for discussions that shaped this analysis. Special thanks to readers who provided regulatory insights.
What did we miss? Let us know: ffff@berkeley.edu.
Mohsenian-Rad and Leon-Garcia (2010) introduced the first comprehensive framework for coordinating cloud computing with smart power grids. Building on their work, Kim et al. (2011) developed distributed algorithms for cloud-based demand response, showing exponentially fast convergence rates that made real-time grid management feasible.
Security emerged as a critical concern as early as Bera et al. (2014), who identified critical vulnerabilities in smart grid cloud integration. Their work demonstrated that energy information exchange creates novel attack vectors during intrusion events. In response, Baek et al. (2015) introduced a secure cloud computing framework specifically for smart grid big data management. Their work highlighted the fundamental tension between the benefits of cloud computing (scalability, cost efficiency) and the security risks (expanded attack surface, data sovereignty). A more recent review by Ali et al. (2024) categorizes smart grid cyberattacks into four primary vectors: cloud misconfigurations, compromised credentials (affecting 86% of breaches), insecure APIs, and supply chain infiltration.
About compromised credentials: IT/OT convergence has created particular concern, because it eliminates traditional airgap protections. Research on cloud-based SCADA systems (MDPI, 2024) reveals that 80% of operational technology incidents now originate from IT system compromises. That means credential compromises can be far more impactful than they look: they can hand remote attackers access to operational systems.
Lawrence Berkeley National Laboratory (2024) reports that U.S. data centers consumed 176 TWh in 2023 (4.4% of total electricity), projected to reach 325-580 TWh by 2028. The IEA's analysis (2024) extends this globally, projecting data center demand to exceed 945 TWh by 2030.
Wonkish coda: Regulation has not kept pace with changes to the cloud/grid relationship. New NERC CIP standards effective January 1, 2024, allow storage of medium- and high-impact Bulk Cyber System Information in the cloud but "do not provide clear guidance" on implementing cloud technologies. Read: gaps in control requirements for high-impact assets.
The patchwork of regulations creates compliance nightmares. The EU's NIS2 Directive became effective October 18, 2024, imposing penalties up to €10 million or 2% of global turnover for essential entities. China maintains the strictest approach with comprehensive state control through the 2024 Network Data Security Management Regulations. Multi-jurisdictional data storage creates fundamental conflicts between regulations like GDPR and the US CLOUD Act.
Any policy framework must address the security implications of deeper cloud-grid integration. We need mandatory disclosure requirements, like the EU Energy Efficiency Directive's new KPIs: PUE, water usage, hourly carbon intensity, and flexible-load capacity—all reported to regulators and made machine-readable.
More critically, we need "NERC-CIP-for-Cloud" standards. Any cloud platform that hosts grid-control workloads must comply with CIP-013 supply-chain requirements and provide Software Bills of Materials (SBOMs) on request. In exchange, certified providers should receive privileged access to CISA's Joint Cyber Defense Collaborative data feeds and liability protections for sharing incident telemetry quickly.