Discussion about this post

Jürgen Barthel:

In AI development, "conflict" is a very big topic: how to recognize and overcome "autoregressive commitment", the model starting off "on the wrong foot". Incidentally, we (they, mostly) compare this to human behavior.

Experience is very important, and you mentioned a valid approach: crew agents. But I have also learned that crew agents are just that: agents tasked with something. So technically, crew agents are the 10-year-old summa-cum-laude graduates discussing among themselves. What agents _can_ focus on is verifying how a proposed solution relates to "RAGged" (retrieval-augmented) experience, and "warning" the answering AI if it starts down a low-probability path. But (and that was my objection) we are talking about crisis decision making: the 2am crisis, when suddenly all the rules went bye-bye.

Still, given that the data and information _are_ there (the other bottleneck), along with whatever information _is_ available about the crisis, AI could evolve from junior ops to an ops manager. Not (at least not yet) at the decision-making level, but e.g. by computing the three best options and presenting them, thus saving valuable time.

Then the changed thinking I am working on becomes interesting. If a "learning AI" orchestrates the team, the results can be weighted the way a human mind would weight them. Still, even on a GPU-powered cluster, this takes time. They ran a "traffic scenario" (accident, traffic jam) at the executive ops level (emergency control center), and it still needed several minutes to analyze, compile, and discuss, while the crisis did not stop and wait but kept evolving, before finally voicing three suggested action paths. In about 20% of the cases the AI was overruled by human experience, which "decomplicated" the suggested ideas; but the humans came to better results for having the three "suggestions". And in 80% of cases the AI "team" was right. And in how many cases would the human emergency operators have been wrong, selecting a suboptimal angle of attack?

So "crew"-based "teams" improve the results. But they need the ability to learn: not being "experts", but evolving from a (10-year-old) junior op into a "16-year-old", valuable ops team member. And maybe further?

The other point: we are talking about 20 disconnected systems. A solution could be to create agents or teams for each of those systems, then merge them into an "ops team". But that is not just an infrastructure issue (you need a lot of GPU power for such an idea); it is also a complexity issue of its own. Yes, you get conflicts between the different teams, and not just one "agent" against another: for each system, you need a team to create and resolve the internal conflicts, and then a smart (enhanced) "manager team" that compiles the information, questions, and weighs. Again, even with a lot of GPU, it isn't instant. But in a crisis, I also learned that while some calls come "automatically", especially when the sh** hits the fan, stepping back "a minute" to assess and discuss is good practice.
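The team-per-system-plus-manager structure can be sketched in a few lines. This is only an illustrative skeleton with hypothetical names (`SystemTeam`, `ManagerTeam`, `Proposal`); the real conflict-resolution and weighting logic would of course be far richer than "pick the highest-confidence proposal":

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    confidence: float  # 0..1, the agent's own estimate

class SystemTeam:
    """Hypothetical: one small agent team per disconnected system."""
    def __init__(self, system_name, agents):
        self.system_name = system_name
        self.agents = agents  # callables: situation -> Proposal

    def deliberate(self, situation):
        # Each agent proposes; the team resolves its internal conflict.
        # Here that is just "keep the highest-confidence proposal",
        # a stand-in for real intra-team conflict resolution.
        proposals = [agent(situation) for agent in self.agents]
        return max(proposals, key=lambda p: p.confidence)

class ManagerTeam:
    """Hypothetical 'manager team' that compiles, questions, weighs."""
    def __init__(self, teams):
        self.teams = teams

    def top_options(self, situation, k=3):
        # Present the k best options instead of a single decision,
        # leaving the final call to the human operators.
        results = [team.deliberate(situation) for team in self.teams]
        return sorted(results, key=lambda p: p.confidence, reverse=True)[:k]
```

The point of the shape is that conflicts are resolved twice: once inside each system team, and once across teams by the manager, which still only *suggests* rather than decides.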

So my advice: no disqualification, but a shifted understanding of AI, not as an omniscient panacea but as a highly competent, learning "expert" (made up of multiple AIs). AI multiplies. Good if good information is the base, bad if it's garbage...

My guess: to create the "ops team member", someone would need to invest several hundred thousand in a GPU rack, not on Amazon AWS or OpenAI. Team up 3-10 AIs "per system" (a modern setup needs some 8-24 GB of VRAM per agent). With 20 systems, start with one area. Then define the roles, the conflict resolution, etc. Prototype. At some point you will recognize that you don't need a COTS LLM (commercial off-the-shelf) but need to invest in a custom one ("occ-ai"?). We are quickly talking about an investment beyond a million (it gets cheaper, slowly).
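The figures above translate into a quick back-of-the-envelope VRAM budget (using only the numbers from the comment; the mid-range is my own interpolation):

```python
# Back-of-the-envelope VRAM budget for the full 20-system build-out,
# using the comment's figures: 3-10 agents per system, 8-24 GB each.
agents_per_system = (3, 10)   # (low, high)
vram_per_agent_gb = (8, 24)   # (low, high)
systems = 20

low = agents_per_system[0] * vram_per_agent_gb[0] * systems    # 480 GB
high = agents_per_system[1] * vram_per_agent_gb[1] * systems   # 4800 GB
print(f"Total VRAM needed: {low}-{high} GB across {systems} systems")
```

Even the low end is several high-memory GPUs; the high end is a multi-rack cluster, which is consistent with the "several hundred thousand, beyond a million" estimate.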

Intuitively "too expensive" for "just another ops staffer". But I guess taking step 1 is a viable investment; step 2 will be justified if step 1 proves the concept. And I've seen more expensive software or technology investments in aviation. We have the power. But we must open our minds and understand the goal. No panacea, but a fast, competent member for any ops team.

And no, it won't replace the human in the loop, but it may reduce team size by taking the friction and the repetitive adjustments of disconnected systems out of the workflow. Used right, it will improve the team. Not a tool, but a team member.

But the main question: "Where do you want to go today?" Yes, Bill Gates, 1995. These are pioneering times. But we misunderstand AI's capabilities as much as its role. So we run, but without a clear and sound direction: a lot of "activism" driven by "hype", not by "what is it, what can it become".

I once had a Windows tablet with a touchscreen and a reflective display (readable in full sunlight). Tablets are standard today; the reflective display isn't. Which one will AI turn out to be?
