Water Foresight Podcast

Constitutional AI for Water

Season 5 Episode 4


What if the real power in our water systems isn’t at the pump, but in the list that decides who gets help first? We sit down with Brandon Owens, CEO of AIxEnergy and author of The Cognitive Grid: Artificial Intelligence and the Governance of Delegated Power in Critical Infrastructure, to unpack how AI is already shaping judgment in critical infrastructure—long before a machine flips a switch. From leak detection platforms to asset risk scoring, models now rank what matters, narrowing options and quietly steering scarce crews, budgets, and attention.

Drawing lessons from the power sector’s high‑stakes outages, we explore two fault lines that surface under scrutiny: traceability and legitimacy. Can water utilities reconstruct how a model bounded choices, preserved alternatives, and handled uncertainty? And even if a model performed as designed, did its design reflect public values, protect vulnerable populations, and respect the right to privacy? Brandon makes the case that real‑time efficiency is not enough; defensible judgment requires a decision trail that regulators and communities can examine and trust.

Enter constitutional governance for water. Brandon outlines a practical framework built on explicit rights—access to essential service and protection from unwarranted surveillance—paired with a separation of roles across Policy AI, Executive AI, and Oversight AI. The result is checks and balances encoded in software: policy constraints that are machine‑readable, operational models that execute within clear boundaries, and oversight that logs, audits, and intervenes when rules or permissions are breached. We discuss how to design traceability into every recommendation, how to keep governance local and adaptable, and why this approach enables faster innovation without sacrificing legitimacy.

If you care about resilient water utilities, ethical AI, and public trust, this conversation offers a path forward: embed governance before automation becomes indispensable. Listen, share with your team, and help shape how our systems decide—while we still decide how they should. Subscribe, leave a review, and tell us what rights you would hard‑code into the water grid.

#water #WaterForesight #strategicforesight #foresight #futures @Aqualaurus

Why The Book Shifted To Governance

SPEAKER_01

This is the Water Foresight Podcast, powered by the Aqualaurus Group, where we anticipate, frame, and shape the future of water through strategic foresight. Welcome to the Water Foresight Podcast. Today's guest is Brandon Owens, the CEO of AIxEnergy. Brandon, welcome to the Water Foresight Podcast. It is a privilege to have you with us today. Thank you, Matt. Pleasure to be here. Well, Brandon, I have to tell you, you've written a very interesting book, The Cognitive Grid: Artificial Intelligence and the Governance of Delegated Power in Critical Infrastructure. And it caught my attention, not because I'm an expert on the electrical grid, but I thought, well, what about the water grid? Where's the love for the water grid? So I wanted to talk to you today about your book and many of the fascinating issues that you raise in it. It's a great read, and I would recommend that people buy it. I want to understand, and we'll get there, your thesis for a constitutional grid. That's a foreshadow. We'll get there. But you start out your book with a very good discussion of how we've gotten where we are today with artificial intelligence. You talk about how innovations become pilots, pilots become tools, tools become workflows, and workflows become expectations. And that is where the whole book takes off. Tell us a little bit about the reason you wrote the book, and walk us through a quick history of how we've gotten here with innovations in the world of energy and water.

How AI Reshapes Human Judgment

SPEAKER_00

Okay, fantastic. So, yeah, I started out writing a book about all the great applications that AI has for the electric power system. And as I dug further and further into it, it really became a book about AI governance and the importance of making sure that the systems we are embedding in the power network are contained and properly governed. That's how I ended up writing about governance. It surfaced as the most important issue when I took a hard look at the intersection of AI and the power system. And it's not just power, as you allude to; it's water, it's the health system, it's all critical infrastructure. AI is a fast-moving train, and you can see where it's headed. It's not there yet, but it's headed toward operational applications inside our critical infrastructure. And if we're going in that direction, we need to make sure that we do this in a responsible way. So in the book, I lay out what I call constitutional governance. And when I say constitutional, I don't mean writing a legal constitution. I mean establishing a clear foundational settlement about where authority lives, how it's exercised, and what makes it permissible, before the AI systems themselves become indispensable within the system.

SPEAKER_01

Yeah, yeah. I really enjoyed your book and how you moved through to that point. But let's map this out. My first question is: in practical terms, how do these decision support systems, whether you're in the electric world or the water world, change operational judgment even when humans remain fully in control?

Lessons From Power For Water

SPEAKER_00

Right. And that's a really good place to start, because it doesn't happen where most people think. In water utilities, judgment is not about who turns the valve or signs the order. It's about how the consequences get allocated when everything can't be fixed at once: which main gets repaired first, which leak gets investigated, which anomaly in a treatment plant triggers escalation. AI decision support tools step into that upstream layer. They don't physically control pumps or treatment processes today. But what they are doing is ranking risk, forecasting failures, clustering incidents, and surfacing patterns that humans might miss in massive data sets. And once those rankings become the default way that everyone sees things, something changes. The menu of reasonable actions starts to narrow. A predictive model doesn't just inform; it actually starts to shape attention. So even when humans remain fully in control, the judgment structure has already shifted because of the results of those AI systems.

SPEAKER_01

Yeah. I will often be caught saying that the water systems, or the water grid, are about 10 years behind the electrical grid. And so as I read through your book, I thought, this is a parallel to where the world of water may be in the next 10, 15, 20 years. How do we think about the electric grid, what it's going through, how these decisions and this technology are being embedded into hardware and other devices, and what that means for the future of water? Is that fair? Do you see that?

SPEAKER_00

Yeah, it has big implications for the future of water. And I think electricity is probably somewhat ahead of the water infrastructure; I don't know if it's 10 years. But for a long time we've been relying on forecasting and planning models, and we've all had those discussions where you're looking at model outputs, scratching your head, and saying, why did the model do this? That was actually a governance issue that hadn't been made explicit. So that's been going on for a long time. What's changed here is the speed and how embedded these AI systems are becoming, and that's really increasing the risks of that sort of opaque governance.

Traceability And Legitimacy After Incidents

SPEAKER_01

Well, I was just going to ask: from your experience with the power grid, what governance problems tend to surface only after a major incident? And why are they hard to fix retroactively?

SPEAKER_00

Well, look, on the power side, when there's an outage, it's a big deal. The PUC and the public can get involved, and lots of questions get asked. What did you know? What constraints were in place? What options were considered? Where did this happen? When did it happen? Who made the decision? In that context, you have two problems that show up. The first is traceability. If it's an AI system and the prioritization logic is embedded inside analytics without structured logging and clear documentation, it becomes very difficult to reconstruct exactly how a decision was made. You can describe what happened, but you're not going to be able to demonstrate exactly how it was bounded. So one problem is traceability. The second is legitimacy. Even if these models perform as designed, people are going to start to ask: was the design appropriate? Were vulnerable populations adequately weighted? Were the escalation thresholds too narrow? Was uncertainty represented honestly? So there's that legitimacy part that's important as well. And as these decision tools mature, they move closer to operational integration, and the risk grows around how exactly utilities can reconstruct things retroactively.

Prioritization As Real Authority

SPEAKER_01

Well, from that: you argue that prioritization is where the authority really lives. But how does that show up in water systems when you make that leap from electric to water? How does that shake out in the world of water?

SPEAKER_00

Well, it's the same thing. It's about resources: you've got limited crews, aging infrastructure, budget constraints, or maybe contamination risks. Someone has to decide what matters more: which neighborhoods wait, which pipes are replaced first, which anomalies justify response. AI tools are already operating at that level. You've got leak detection platforms that rank segments by anomaly likelihood, and asset risk models that score mains by probability and consequence of failure. None of these systems is actually issuing boil water advisories or anything like that, but they're influencing where teams look first and where the resources go. So prioritization of scarce resources is where these systems are quietly exercising authority, and they will continue to increase that authority over time as they become embedded in our decision-making frameworks.

Post‑Event Reconstruction Vs Performance

SPEAKER_01

Ah, okay. So you mentioned briefly post-event reconstruction. What does that mean in terms of infrastructure, and why does it matter more than real-time performance, for example?

SPEAKER_00

So this comes straight from the power grid as well. Real-time performance is really about efficiency and compliance: did the system stay within certain limits, did it meet particular service targets, and so on. Post-event reconstruction is different. It's about whether you can explain, clearly and defensibly, how consequential decisions were prepared and authorized. So after a major event, utilities need to show a lot more than accuracy. They need to show that the decision process was bounded by public interest constraints and all of the mandates they're under. Now, if AI tools are collapsing all this uncertainty into a single recommendation without preserving alternatives, assumptions, and constraint conditions, then reconstructing the decision-making process becomes very difficult, right? It all becomes collapsed in the tool, and the tool just said, do X.

SPEAKER_01

Right.

SPEAKER_00

It may have optimized correctly according to whatever objective function you had in there, but you don't know if that objective function was actually aligned with public value.

SPEAKER_01

Yeah.

SPEAKER_00

So in public infrastructure, grid as well as water, legitimacy depends on your ability to explain those decisions.

SPEAKER_01

Okay. So we often talk a lot about operational excellence, and we can go headlong into developing all kinds of innovations and see that we're saving money and saving time. But when there's a foul-up or a mistake, can we perform that post-event autopsy? Is there an auditable trail that says, this is where the mistake was made? Maybe it was embedded in the instructions we gave the device, or we missed something, and you can go back and look. Maybe one of your concerns is that if we don't arrange these AI-based systems correctly, we may never know. And we may not be able to correct things, prevent things, or fix things.

SPEAKER_00

Yes, you said that correctly. And that's exactly right. I mean, my concern is that even if it gets the answer right, you start to lose the ability to understand how you got to that answer. The why. Yes, and then it compounds. Right. And then it becomes essentially untraceable. And as I said at the beginning, this is a fast-moving train. It's been slow to get AI into anything operational, but at the rate of change, it's going to happen. And it could happen quicker than people realize, even in critical infrastructure.

Risks Of Business As Usual

SPEAKER_01

Yeah, yeah. I want to ask, from the foresight perspective: if a water utility doesn't do anything different today, what kinds of problems or scenarios is it most likely to face 10 years from now?

SPEAKER_00

Yeah, that's a great question. If they don't do anything, and things continue to move forward, and they don't implement any governance.

SPEAKER_01

Yeah, business as usual.

SPEAKER_00

Yeah. Well, I think what you'll likely see is increasing difficulty defending decisions under regulatory or public scrutiny, as we just talked about. You're going to start to see inconsistent protection of vulnerable populations if objective functions are narrowly defined. You'll see gradual shifts in risk tolerance as models are tuned for performance rather than public values, and ultimately an erosion of public trust, when harm appears to be an impersonal system outcome rather than a contestable human decision. The ethical failures that could occur in the infrastructure aren't going to be explosive. They're going to show up in small increments over time and accumulate.

SPEAKER_01

So I guess the question that I have from that is: what is the solution for water utilities today, if we think about the future and some of these scenarios? I don't think you're necessarily painting a dystopian picture. You're really calling attention to future scenarios that could occur if we are not aware of some of the challenges that might be right in front of us. But what's the solution for a water utility today if it wants to be proactive and stay ahead of these governance challenges, as you call them?

SPEAKER_00

Yeah, let me be clear. I am a big fan of innovation and AI.

SPEAKER_01

Yeah.

SPEAKER_00

And I actually am a champion of using this technology to make our infrastructure more affordable, more efficient, better for the environment, et cetera. So nothing I'm saying is, let's not innovate, let's not do this. What I'm doing is sounding the alarm on the need for governance. And the solution I propose is constitutional governance. Basically, here we're assuming that the machines begin to prepare judgment, and governance cannot remain purely procedural or after the fact. It has to be embedded in the system architecture. And that means three things in my mind. One is a separation of roles: you have to have a separation of powers so the system can govern itself, and we can go into some more detail there. Two, the system has to have clear boundaries built in about what it can and cannot do. And three, we've talked about traceability; auditability is another issue. The system has to be auditable by design. If we can get those things up and running in these systems as we start to integrate AI, then we're not going to run into some of the problems I've identified.

Rights, Constraints, And Privacy

SPEAKER_01

Well, explain a little further this notion of a constitutional grid. The lawyer in me appreciates any time we talk about constitutional law, but you're not really talking about constitutional law. It's a metaphor, a framework, as you've said. Yes. And you talk about rights. Let's talk about the rights first. What do you mean by rights? I think you have two elements there: access and explanation. Talk about those two elements.

SPEAKER_00

Yeah. So again, when I say constitution, I don't mean a legal constitution. I mean establishing a clear foundation: identifying where authority lives, how it's exercised, and what makes it permissible.

SPEAKER_01

Right.

SPEAKER_00

So it starts with a statement of what rights people have in this system. And one of them, for critical infrastructure, power as well as water, is the right to access. This is a fundamental right that cannot be violated by the system, and so it needs to be embedded in this constitution as part of how the system operates. The other is no warrantless surveillance; the right to privacy is a key part of any constitutional governance system for critical infrastructure. So the suggestion is, and there may be others, that we make sure to clearly identify and embed these rights in the governance system up front. These are clear lines that can't be crossed.

Separation Of Powers For AI Systems

SPEAKER_01

Yeah. So there are both rights and constraints in the constitutional framework that you offer. And some of those constraints, I think I heard you talk about, involve surveillance. As an aside, I remember asking one guest on an episode about smart toilets: is my smart toilet going to spy on me or my guests? Right? Yeah. We could go down that rabbit hole, no pun intended. But there's also disconnection authority, you mentioned that, and some other things, like no collusion. Now, some of those may be a bit outside the water grid as we think of it, but there are values associated with who gets repaired first in a storm and who gets repaired last. And as you mentioned, we have social justice, environmental justice, and energy justice considerations that in some jurisdictions are mandated by law, and in others are more a matter of policy. But those are features of your constitutional grid. Fair enough?

SPEAKER_00

Yes, that's right. And I'm not imposing an ethical system. What I'm suggesting is that we make the system explicit, and then, because these are public goods, we agree through a social or political forum exactly how this system operates and what those rights are, so that there's a core set of principles we can all agree on about how this critical infrastructure serves the public.

SPEAKER_01

Yeah. It's more about transparency and informed consent. Yes. Maybe informed consent. I know there are regulators involved in this, but through our elected and appointed officials there is a sort of implied informed consent. Yes, make it transparent. Yeah. You had touched on, and I want to return to it, the separation of powers metaphor: there's a policy AI component, there's an operational AI component, and then you have an oversight AI component. Tell me how that works through this metaphor of the constitutional grid. Tell us about those parts.

Local Oversight And Flexibility

SPEAKER_00

Yeah. So you have different separated powers in the system, and just like our political system in the US, these end up being checks and balances. You have the policy AI component, which is really enforcing the policies that everyone agrees upon. The executive portion is executing, or operationalizing, those policies. And the judicial portion is making sure that everyone's in compliance and that appropriate permissions and constraints are being honored at all times. So there are different aspects of this constitutional governance approach that need to be implemented, and one effective way to do that is through the separation of powers.

SPEAKER_01

It seems, as you were talking about the oversight AI component, that, depending on the nature of your utility, it could be a state public utility commission. It could be an environmental regulator, maybe even FERC, the Federal Energy Regulatory Commission, looking at your AI-embedded infrastructure and reviewing it on the front end, or even on the back end after an event. They could provide oversight for many of the things you've addressed.

Optimistic Future For Water Governance

SPEAKER_00

Yeah, yeah. I mean, I think it's important that the systems are handled at as local a level as possible, whether that's the state or the community level, and that the rights and the ethics embedded reflect that community. And then also that there's flexibility to revisit what's embedded in these systems. We don't want to create inflexible governance frameworks, and we don't want to create authoritarian governance frameworks either. So there are some thorny issues that need to be resolved. But the basic premise here is that if we make this transparent and explicit, then we can understand and see what's going on and agree on what should happen, rather than allow all of this to collapse into machine judgment. And as those machines become smarter and faster, that machine judgment starts to happen at a speed that we cannot contest or control if we do not have the governance system already embedded in the architecture.

SPEAKER_01

So what is the future of a constitutional grid for water? If you were to sum it up, looking out 10 years, what is that going to look like in your mind? First, I guess fundamentally, is it a possibility? Are we going to have to deal with a constitutional grid for water? And if so, what does that look like? Is there an optimistic scenario, a collapse scenario, or business as usual? What are some of your thoughts on that?

SPEAKER_00

Well, you may be able to use the fact that you're a decade behind to your advantage and watch some of these other institutions and sectors grapple with these problems. But look, I don't really see this as changing the nature of institutions or blowing up the system. What I'm talking about is embedding some fairly straightforward governance software across the board in these systems, so we have the values embedded and we have the transparency and auditability in there. And so when we start to move AI into the operational core, we're protected, and the infrastructure is acting in accordance with our values. So there's a very optimistic scenario where we can do this right, where we can get governance right for critical infrastructure, and where we can move forward and leverage the full power of AI for the benefit of everyone.

SPEAKER_01

I don't know if I want to ask you about a dystopian scenario, a collapse scenario, what could happen. But I think you touch on some of those issues lightly in your book, and I'll ask the listeners to look at the book, because you do raise some issues about what could happen if we do not contemplate a constitutional grid, and you lay those out. One of the things that I ultimately come to, and it's maybe a loaded question, one that we really can't answer on this episode, is that it comes down to who decides. And if values are required for this AI infrastructure, what are those values? So: who decides, and what are the values? That's a whole other episode. It moves us from a constitutional architecture and framework discussion into a moral and ethical discussion. And I'm not asking you to answer those questions, but that's kind of where we end up.

Closing, Book Info, And Links

SPEAKER_00

I have a short answer to the last one. When you're talking about social goods and critical infrastructure, the short answer is: we decide. And the urgency that I'm trying to bring to the debate is, let's make sure that we decide while we still have the power to decide, before we hand that power over to the machines to decide for us.

SPEAKER_01

Okay. Well, this has been a fascinating conversation, Brandon. I probably have many days' worth of questions that we could go over and talk about. But I wanted to talk to you because this book really spoke to me and to others in the world of water who are thinking about what AI means, what it is, and how we harness it in an appropriate way. So I would encourage you, the listener, to get a hold of this new book, The Cognitive Grid: Artificial Intelligence and the Governance of Delegated Power in Critical Infrastructure, by Brandon Owens. And Brandon, you've been a great guest today. Tell us where folks can get a hold of you and where they can pick up your book if they have other questions.

SPEAKER_00

Well, I want to start by thanking you for having me on your program and for the opportunity to talk about this stuff, which I love and am passionate about. And I'm thrilled that it resonated with you; I think you're the target audience. You can find the book on Amazon. It's The Cognitive Grid. But I also have a thought leadership platform, aixenergy.io, where I talk about governance issues related to the power system, and where I also have links to all of my books and writings. You can find me there and send me a note.

SPEAKER_01

Wonderful. Well, Brandon, thank you again for being a wonderful guest on the Water Foresight Podcast. A great discussion. And we thank you, the listener, for joining us today, and we ask that you join us next time for another episode of the Water Foresight Podcast. Have a wonderful evening. Thank you. Thank you for listening to the Water Foresight Podcast, powered by the Aqualaurus Group. For more information, please visit us at Aqualaurus.com or follow us on LinkedIn and Twitter.