
The prompt is not the problem

We are in danger of building a society full of architects with nobody left to pour the concrete.

A leader opens an AI tool and types: “Create a three-year digital transformation roadmap for our business.” Within seconds, the model returns something polished and plausible. It has workstreams, milestones, governance, risk themes, and a neat sequence of priorities. On the surface, it looks like progress.

And, in one sense, it is. AI is making it easier than ever to operate at a higher level of abstraction. Strategy can be drafted faster. Plans can be framed more quickly. Analysis that once took days can now be produced in minutes.

The prompt is not the problem. The missing capability underneath it is.

Because a roadmap is only ever the visible layer. Beneath it sits everything that determines whether it can be delivered at all: the people who understand how the business actually runs, the operators and engineers who know the systems underneath, the practitioners who can trace dependencies, diagnose failures, recover services, and turn strategy into working reality. Those layers have not disappeared just because the interface has become smoother.

This is the risk I think many leaders are underestimating. As AI makes abstraction easier, faster and more accessible, it also increases the temptation to live higher up the stack. More people can now produce plans, frameworks and strategic outputs without first spending much time in the operational layers underneath them. That may feel efficient in the short term. But over time it raises a harder question: what happens if we get better and better at generating direction, while getting worse at growing the people who know how to build, operate and recover the systems that direction depends on?

That is not just a technology question. It is a workforce design question. It is a succession question. And increasingly, it is an operational resilience question.

The missing capability beneath the abstraction

AI is not the enemy in this story. In many ways, it is an extraordinary accelerator. It can help people move faster, see patterns earlier, draft more clearly, and lift the quality of work that once depended on much slower manual effort. Used well, it should make organisations more capable, not less.

But abstraction has a hidden cost when it grows faster than capability underneath it. Every higher layer still depends on someone understanding the layers below. The more polished the surface becomes, the easier it is to forget that the surface is still resting on systems, infrastructure, operations, workflows, trade-offs, and people who know how to make all of it function in the real world.

As Figure 1 shows, the strategic and AI layers are only the visible top of a much larger operating stack. The higher an organisation moves into abstraction, the more it still relies on people who understand systems, platforms, operations and infrastructure underneath.

That is why this article is not really about prompting. It is about workforce design. More specifically, it is about what happens when businesses become very good at producing plans, frameworks and strategic outputs, while becoming less deliberate about backfilling the operators and engineers who make those outputs real.

For business leaders, that creates a problem that goes well beyond talent acquisition. If the lower and middle layers of capability are not being built properly, succession starts to weaken. Future leaders have less grounding in how the business actually works. Technical judgment at the top becomes shallower. Key people become harder to replace. And when something goes wrong, too few people know how to diagnose, adapt and recover without escalating everything back to a shrinking group of experienced operators.

This matters because the strongest leaders are rarely formed in abstraction alone. They are usually shaped by exposure to the work itself. They have seen systems misbehave, watched projects stall under real constraints, learned how users actually operate, and developed enough practical context to tell the difference between a good-looking plan and one that can survive contact with reality. Strategy matters. Architecture matters. AI will matter even more over time. But all of them become more valuable when they sit on top of real operational understanding, not in place of it.

The real challenge, then, is not whether AI will change work. It already is. The challenge is whether organisations will redesign work in a way that still grows the next layer of people who can build, operate, troubleshoot and lead. Businesses that get this right will not just be more efficient. They will be more resilient, easier to scale, and better prepared to develop leaders from within rather than continually searching for finished products from outside.

What shrinks when the practical layer disappears

This pattern is not unique to technology, and it is not confined to cybersecurity. Australia has already lived versions of it in more visible parts of the economy.

Take manufacturing. Australia’s last domestic car manufacturing plants closed in 2017. That did not mean Australia stopped designing, coordinating, importing, financing or consuming vehicles. It meant something more subtle, and more consequential: a practical layer of domestic capability disappeared, and with it a body of experience, operational knowledge and industrial depth that is slow and expensive to rebuild once lost. Manufacturing still matters materially to the Australian economy, employing roughly 930,000 people and contributing around $137 billion in value-added output in 2025, but the lesson is not about nostalgia for an older industrial model. The lesson is that when the practical layer shrinks, resilience shrinks with it.

Figure 2 extends this idea beyond technology. Whether the context is manufacturing, fuel security, technical workforce depth or organisational operations, the pattern is similar: as dependence on abstraction rises and local practical capability declines, resilience risk increases.

Fuel security offers a similar reminder. In March 2026, the Australian Government temporarily released up to 20 per cent of mandated petrol and diesel stocks under the Minimum Stockholding Obligation to ease local market pressure. Reuters also reported at the time that Australia imports about 90 per cent of its fuel. Again, the point is not that global supply chains are inherently bad, or that every layer of capability must be held domestically at all times. It is that dependence becomes more visible when disruption arrives. The more distant the critical capability, the less room there is to adapt quickly when conditions change.

That same principle applies inside organisations. A business can become highly capable at planning, coordination, reporting and strategic design while becoming thinner in the layers that actually build, operate, repair and sustain the systems underneath. It can look sophisticated on the surface while becoming more fragile below it. And because that erosion happens gradually, the risk is easy to miss until something important breaks, a key person leaves, or a major transformation effort reveals how little operational depth is actually sitting beneath the roadmap.

AI is now accelerating this same shift inside white-collar and technical work.

The opportunity is obvious. More people can contribute to planning, analysis and structured decision-making. The danger is quieter. As abstraction gets easier, the temptation grows to assume the practical layer will somehow look after itself. But capability rarely backfills on its own. If businesses and institutions are not deliberate about how operators and engineers are formed, supported and progressed, they may discover too late that they have become better at describing change than delivering it.

Where judgment is actually formed

Part of why I care about this issue is that I did not learn my own trade from the top down. A lot of the judgment I rely on now was formed much earlier, working through real problems in real environments where systems broke, services failed, constraints were non-negotiable, and the answer was not sitting neatly in front of me.

Some of the most formative learning came from troubleshooting infrastructure and server issues where something important had stopped working and the only way forward was to think clearly, trace dependencies, isolate variables, test assumptions, and restore service. That kind of work teaches lessons that are difficult to replicate in theory alone. You learn how changes ripple. You learn how fragile “obvious” assumptions can be. You learn that documentation helps, but lived familiarity with a system often matters just as much when the pressure is on.

Over time, that kind of exposure widened into a broader understanding of the layers modern organisations actually depend on: infrastructure, identity, enterprise systems, networking, business platforms, security, and the operational realities that sit between them. A lot of what later looks like strategic judgment is really accumulated operational scar tissue. It comes from having seen enough of the underlay to recognise what is realistic, what is brittle, what is missing, and what will likely fail once a plan collides with the real world.

That is why I am wary of pathways that move people too quickly into abstraction. Higher-level thinking matters. It always has. But it becomes much stronger when it is built on top of real exposure to how things are actually run, supported, recovered and improved.

The data is already pointing in this direction

This concern is not just philosophical. We can already see the shape of it in hiring patterns, graduate readiness, and the way early-career work is being redesigned.

Figure 3 illustrates the workforce dynamic that the data now points toward: fewer formative entry points, fewer chances to build judgment early, and a weaker pipeline into the middle and senior layers later.

One part of the problem is that too many people are arriving in the workforce with unrealistic expectations about where capability is built. In a market saturated with messaging about AI, strategy, digital transformation and high-value knowledge work, it is easy for early-career professionals to imagine the path starts close to architecture, advisory or orchestration. In reality, many of the strongest careers still begin much lower down, in support, operations and hands-on delivery, where people are exposed to the practical layers real businesses actually run on.

That gap between expectation and reality is showing up in employer sentiment. A 2025 survey from Hult International Business School and Workplace Intelligence found that while almost all leaders reported difficulty finding talent, 89 per cent said they avoid hiring recent graduates, with lack of real-world experience the most common reason cited. That does not mean graduates are incapable. It suggests many are entering the market without enough exposure to how work actually functions inside live organisations, where systems, users, dependencies and operational constraints are rarely tidy.

At the same time, the lower rungs of the ladder are becoming harder to access. A Stanford Digital Economy Lab paper found meaningful early-career declines in employment within AI-exposed occupations, especially among workers aged 22 to 25. SignalFire’s 2025 State of Talent report also described entry-level hiring in tech as collapsing, while other labour market analysis has shown particularly steep declines in highly AI-exposed entry-level roles. The overall message is not that AI is eliminating work wholesale. It is that the apprenticeship layer is under pressure, and the pressure is falling hardest on the people who most need formative exposure.

Employers themselves are also telegraphing the direction of travel. The World Economic Forum reported in 2025 that 40 per cent of employers expected to reduce workforce size where AI can automate tasks, even while anticipating growth in other AI-related capabilities. In Australia, the Reserve Bank has similarly noted that firms expect AI and technology investment to be labour-saving and productivity-enhancing over time, while also changing the shape of the workforce and the skills required within it. That is not inherently bad news. But it does raise a sharper question: if AI makes it commercially rational to automate or compress more of the lower-level work, who is still being given the reps that used to build judgment?

That question matters because capability is not formed only through courses, credentials or clean strategic work. Much of it is formed through repeated contact with real systems, imperfect information, small failures, constrained decisions and guided responsibility. If entry-level work is redesigned purely for efficiency, without preserving enough of that learning layer, organisations may end up with people who can produce polished outputs earlier, but who take much longer to build the deeper understanding needed to operate, adapt and lead.

Why hands-on exposure still matters

If the labour-market data tells us the lower rungs of the ladder are under pressure, the next question is why that matters so much. The answer sits in how capability is actually formed.

Curiosity is often treated as a personality trait, something nice to have around innovation or learning. In practice, it is far more important than that. The OECD describes curiosity as the desire to acquire new information, resolve gaps in understanding, and explore the unknown, and links it directly to exploratory learning behaviour. That matters because technical judgment rarely comes from being handed the answer in a clean, finished form. It grows through contact with uncertainty: following the trail of a problem, testing assumptions, seeing cause and effect, and gradually learning how systems behave when reality does not match the plan.

Figure 4 captures the cycle through which real capability tends to form: curiosity leads to exposure, exposure leads to experimentation, experimentation leads to troubleshooting, and repeated reflection turns those experiences into judgment.

That is also why hands-on learning remains so important in technical and operational work. A 2024 systematic review of informal STEM learning found that the strongest learning environments consistently feature inquiry, problem-based and project-based learning, design activity, and hands-on participation. Those are not just educational preferences. They are mechanisms for building understanding. In workplace terms, they map to troubleshooting, guided ownership, real system exposure, and the kind of small, imperfect problems that force people to think rather than simply follow a script.

The implications for the workforce are significant. If early-career people are increasingly shielded from the layers underneath the polished interface, then some of the most important forms of learning become harder to access. They may still learn terminology. They may still produce impressive-looking outputs. They may even become productive quickly with the help of AI. But there is a difference between producing an answer and understanding the system that the answer depends on. That deeper understanding is often built through repeated exposure to ambiguity, friction and trial-and-error.

This is where necessary friction matters. Not chaos. Not burnout. Not the kind of inefficiency that wastes people’s time without teaching them anything useful. The point is that some degree of struggle is formative. It is what teaches people how to diagnose, how to recover, how to spot weak assumptions, and how to build intuition about what is really happening underneath a process or technology stack. If every rough edge is removed too early, some of the learning disappears with it.

In technical environments, that often means the most valuable early lessons come from exactly the kinds of experiences that can look low-status on paper: tracing why a service is failing, understanding how access decisions ripple through a business process, working through backup and recovery assumptions, or discovering that the system described in a diagram is not quite the system people are actually relying on day to day. Those moments are not distractions from professional development. They are often the beginning of it.

The proving ground we are redesigning away

One of the quieter risks in this shift is that the very parts of work most likely to be automated or smoothed out are often the same parts that historically built practical judgment. Support and operations roles sit at the centre of that problem.

These roles are often undervalued. They are treated as lower-status work, something to move through quickly or remove altogether once better tooling, better process, or better automation becomes available. In one sense, that instinct is understandable. A lot of repetitive support work should be improved. A lot of low-value manual effort should disappear. But there is a difference between removing repetitive work and removing the learning layer. That distinction matters more now than ever.

Support and operations are where many people first learn how organisations actually function. They see what users do, not just what policies say they should do. They see how identity decisions affect access, how misconfigurations ripple into outages, how undocumented dependencies emerge, and how quickly an apparently minor issue can become a business problem. They also learn the disciplines that later matter at every level of leadership: how to ask better questions, how to narrow a fault domain, how to communicate under pressure, how to escalate properly, and how to distinguish noise from signal. Those are not side benefits of operational work. In many cases, they are the foundation of later engineering, architecture and leadership capability.

That is why it is risky to redesign entry-level work purely around efficiency. If early-career roles are stripped back to observation, coordination and tool-assisted output, the organisation may gain short-term speed while losing some of the very conditions that help people build judgment. Research on generative AI at work shows this tension clearly. AI can improve the productivity of novice workers substantially, which is a genuine opportunity, but that also means people can produce stronger-looking output earlier in their careers without necessarily building the same depth of understanding underneath. The danger is not faster learning. The danger is mistaking faster output for deeper capability.

The better question for leaders is not whether support and operations should stay exactly as they were. They should not. The better question is what those roles should become in an AI-shaped workplace. My view is that the goal of automation should be to remove repetitive work while preserving the diagnostic, judgment-building work that teaches people how systems really behave. Early-career roles should still involve real troubleshooting, guided ownership, and exposure to live environments. Otherwise, businesses risk smoothing away the very proving ground that used to turn juniors into trusted operators and, eventually, into credible leaders.

What gets lost when the ladder disappears

For many leaders, this issue only becomes visible once it starts to hurt. Until then, the organisation can still look capable on paper. The strategy exists. The senior titles are filled. The transformation roadmap is in motion. The reporting looks polished. But beneath that surface, something more important may be thinning out: the internal pathway through which future operators, engineers and leaders are formed.

Figure 6 summarises the three business costs of hollowing out the pathway beneath the strategic layer: weaker succession benches, greater operational fragility, and a more top-heavy workforce.

The first cost is succession depth. Businesses often say they want leaders who understand the organisation, who have credibility with the people doing the work, and who can make good decisions under pressure. Those qualities rarely appear in a vacuum. They are usually built over time, through exposure to the work itself, through seeing how systems behave in the real world, and through learning where theory and reality diverge. That is why the strongest future leaders are often not the people who started closest to strategy. They are the people who grew up inside the work, learned the business from the inside, and developed the trust that comes from knowing what good actually looks like in practice.

The second cost is operational fragility. When something important breaks, polished outputs and abstract frameworks only go so far. At that point, the business needs people who can diagnose, adapt and recover. The problem in many organisations is that too much of that capability sits inside a very small number of long-tenured operators and engineers. There is often one infrastructure person, one senior platform person, one trusted operator who “just knows” where the hidden dependencies are, what the system quirks look like, and which recovery path is most likely to work when the documented process stops short. Much of that knowledge is tacit. It lives in judgment, pattern recognition and accumulated experience more than in diagrams or runbooks.

That creates a classic key-person risk, but one that is often underestimated because the organisation mistakes documentation for capability transfer. Documentation matters. It should be improved. But many of the most useful operational insights are only really absorbed through doing the work alongside experienced people, owning small but real pieces of responsibility, and seeing the same kinds of problems enough times to build intuition. If the next layer of staff is not getting that kind of guided exposure early, the business is not actually reducing its dependency. It is simply delaying the moment when that dependency becomes obvious.

AI can intensify this risk if organisations are not careful. If more of the day-to-day work is handled by automation before newer staff have built the underlying intuition themselves, then some of the tacit learning that used to occur through repetition may never happen at all. The result is a strange kind of capability gap: the business may appear more efficient, but the bench underneath its key operators is thinner than ever. Productivity rises, while transferability weakens. That can look like progress until the experienced person leaves or a serious outage exposes how little judgment has really been backfilled. The broader evidence around early-career work and AI supports that concern, especially where junior pathways are shrinking and novice output is being boosted faster than deeper understanding is being formed.

The third cost is workforce stratification. As AI makes high-level output easier to produce, the market naturally places more attention on people who can frame, coordinate, present and direct. Those are valuable skills. The problem is not that they matter. The problem is that they can begin to crowd out the quieter, slower, more formative work that produces the people capable of turning plans into reality. Over time, that can leave organisations with a top-heavy shape: more planners, more coordinators, more strategy language, and too little practical depth in the layers that actually keep systems running.

This is where the business risk starts to become social risk as well. If too many people are encouraged to aim straight for the abstract layer, and too few are being formed in the operational middle, then the whole pipeline becomes distorted. Businesses struggle to find the kind of grounded future leaders they say they want. Technical teams become more dependent on a shrinking number of experienced people. And society ends up signalling that the most valuable work is the work furthest removed from the machinery, even as that machinery still needs to be built, supported and recovered by someone. That is not a sustainable long-term model for resilience, succession, or leadership quality.

Why strong specialists rarely start narrow

One of the easiest mistakes in modern workforce design is to confuse specialisation with separation. We increasingly talk about architecture, cybersecurity, AI, cloud, data and governance as if they are self-contained disciplines that can be mastered largely from within their own conceptual boundaries. In practice, the strongest specialists are often the people who understand more of the adjacent layers than their title suggests.

That is especially true in architecture. Too many people are being encouraged to think like architects before they have spent enough time understanding what is actually being built, run, supported and recovered underneath. The result can be elegant-looking target states and well-structured recommendations that are poorly grounded in delivery reality. Good architecture is not just pattern recognition at a high level. It is judgment about trade-offs, dependencies, sequencing, operational burden, and the difference between what is theoretically possible and what an organisation can actually sustain.

Cybersecurity is an even clearer example. I often describe cyber as a second-language discipline, not a primary one. By that I mean the strongest cyber practitioners are usually fluent in something real underneath first. There is no single right starting point. It might be infrastructure, identity, systems administration, networking, software, platforms, operations, or another operational domain. The point is not that everyone must follow the same path. The point is that cyber tends to become much stronger when it is built on top of practical fluency in at least one real part of the environment it is meant to protect.

That matters because too many cyber pathways are becoming detached from the enterprise systems they are supposed to secure. It is possible now to learn a lot of security language, frameworks, controls and attack concepts without building much familiarity with identity systems, Windows and endpoint environments, Microsoft 365, networking, backup and recovery assumptions, business platforms, or the messy operational reality that makes risk real inside a business. Research on cybersecurity education has already pointed to gaps between what graduates are learning and what industry actually needs, particularly in areas tied to operations and real-world implementation. That gap should concern leaders, because security advice becomes weaker when it is not grounded in the systems, users and dependencies that make the advice meaningful.

The same principle increasingly applies to AI-related roles. Too many people are being sold AI careers as though prompting is the core skill, when the deeper value often sits elsewhere: understanding the data, integrations, governance, workflows, operational constraints and business processes that sit underneath the model layer. The strongest AI practitioners will not simply be the ones who can generate outputs fastest. They will be the ones who can connect those outputs back to enterprise reality, challenge weak assumptions, recognise hidden dependencies, and translate abstraction into something that can be implemented responsibly and maintained over time.

Seen this way, broad foundations are not in tension with specialist excellence. They are often what make specialist excellence possible. The strongest specialists are rarely narrow in the way their job title implies. They have enough operational grounding to understand adjacent layers, enough delivery exposure to know where plans tend to break, and enough practical context to make better decisions later when the work becomes more strategic. That is one reason businesses should be careful about pathways that push people too quickly toward specialist identity without enough contact with the systems and realities those specialties ultimately depend on.

The balance we should keep

None of this is an argument for preserving old work simply because it is familiar, nor is it an argument against AI. New tools will change how capability is built. Some repetitive tasks should disappear. Some traditional entry-level work deserves to be redesigned or removed altogether. In many contexts, that is exactly what progress should look like.

There is also real evidence that AI can help junior people become productive faster. Research on generative AI in the workplace has shown that novice and lower-skilled workers can see especially strong productivity gains when AI helps them access patterns, guidance and language that would otherwise take much longer to develop. That is a genuine opportunity. Used well, AI can lift the floor, accelerate confidence, and reduce some of the low-value friction that has historically slowed people down.

The risk is that faster output can be mistaken for deeper capability.

That distinction matters because people can look more capable earlier in their careers when AI helps them produce polished answers, draft cleaner recommendations, or complete work that previously would have taken much longer. But if that acceleration is not matched by real system exposure, real responsibility, and enough contact with ambiguity to build judgment, then some of the capability growth may be more apparent than real. The person becomes faster before they become deeply grounded. The work looks stronger before the understanding underneath it is equally strong.

That is why I think the question is not whether AI should be used in early-career development. It absolutely should. The better question is what should remain difficult on purpose. Every profession has forms of necessary friction: the parts of the work that feel messy, repetitive or uncomfortable but that teach people how things actually behave. In technical and operational environments, that often includes troubleshooting, small failures, constrained decisions, guided escalation, testing assumptions, and learning how to recover when the first answer is wrong. Remove all of that too early, and some of the judgment disappears with it.

The goal, then, is not chaos and it is not burnout. It is not sink-or-swim. It is enough friction to build judgment. Too little, and juniors produce polished outputs without being able to explain the system, trade-offs or failure modes underneath. Too much, and they become overwhelmed, unsupported and slow to progress. The sweet spot is AI-assisted learning combined with real responsibility and guided problem-solving. That is where people become more capable without being sheltered from the very experiences that form durable professional judgment.

This is where leaders need to be deliberate. AI should shorten the learning curve, not erase the proving ground. It should help people move through lower-value work faster, while preserving the diagnostic, interpretive and operational reps that turn early productivity into long-term capability. Businesses that get this balance right will not just have more efficient junior staff. They will have stronger future operators, engineers and leaders.

What leaders need to build now

If this problem is ultimately one of workforce design, then the response cannot be limited to better hiring slogans or more optimistic graduate messaging. Leaders need to decide, deliberately, what kind of capability pipeline they are building underneath the strategic layer of the business. In practice, that means treating workforce development as a resilience issue, not just a recruitment issue. The organisations that navigate AI well will not be the ones that simply automate fastest. They will be the ones that use automation without hollowing out the pathway that produces future operators, engineers and leaders. That principle is consistent with the broader labour-market evidence: AI can raise productivity and compress lower-level work, but it does not eliminate the need for practical capability underneath.

The first shift is to treat succession as a capability strategy. Too many businesses still think about succession only in terms of who might fill a senior role one day. In reality, succession is also about whether the organisation is reducing hidden dependency, transferring tacit knowledge, and building enough depth beneath critical operators and engineers to avoid brittle over-reliance on a handful of people. A business with no serious internal pathway into key technical and operational roles is not simply underdeveloped from a people perspective. It is carrying avoidable resilience risk.

The second shift is to use AI to accelerate capability formation, not replace it. This is where many organisations may need the most discipline. AI can help juniors become productive faster, and that is valuable. But the point of that speed should be to develop stronger future operators and engineers more quickly, not to conclude that the development layer is no longer needed. Used well, AI should shorten the route into judgment-building work. Used poorly, it creates polished output that masks shallow understanding. The opportunity is real, but so is the trap.

The third shift is to build structured progression from support and operations into engineering, architecture and leadership. Businesses often say they want senior people with strong judgment, practical credibility and a real understanding of how the organisation works. The simplest way to increase the odds of producing those people is to create visible pathways through the work itself. That does not mean every career must be linear, and it does not mean every leader must come from the same function. It means the business should stop treating support and operational roles as dead ends or low-status holding areas. In many organisations, they are the most important proving ground for future leadership quality.

The fourth shift is to expose people to enterprise reality, not just theory. That means real systems, real users, real constraints, real change, real outages, real trade-offs. It means giving developing staff enough contact with identity issues, endpoint and server environments, networking, backup assumptions, business platforms, integrations, and the messy truth of how work actually gets done. This is not because everyone needs to become a deep specialist in everything. It is because troubleshooting and judgment later in a career are much stronger when they were built on contact with reality earlier. The learning research is clear that inquiry, exploratory behaviour and hands-on activity matter, and the graduate-readiness evidence suggests too many people still reach the workforce without enough of that exposure.

The fifth shift is to build cross-disciplinary foundations underneath specialist roles. This matters especially in cyber, AI, architecture and other abstracted disciplines. The strongest specialists usually understand more adjacent layers than their title suggests. They know enough about the underlay to ask better questions, make better trade-offs, and challenge the hidden assumptions inside elegant-looking plans. That is one reason stronger strategic decisions so often come from people who are not only specialists, but specialists with operational grounding. In an AI-shaped economy, that kind of cross-layer understanding is likely to become more valuable, not less.

For leaders, the practical implication is straightforward. Stop measuring success only by how efficiently work is removed. Start measuring it by whether capability is being built underneath the automation. A manager who automates away junior work without creating a better pathway into judgment has improved throughput, but may also have weakened the bench. A technical leader who repeatedly hires senior-ready capability without growing internal depth may solve today’s delivery problem while quietly worsening tomorrow’s succession problem. Over time, those choices compound.

If the goal is better leadership quality, stronger resilience and deeper execution capability, then the pathway underneath those outcomes has to be designed on purpose. It will not appear automatically. And in an environment where AI makes the strategic layer easier to access, deliberate pathway design becomes more important, not less.

The questions leaders should be asking now

For leaders, the hardest part of this issue is that it often hides inside otherwise sensible decisions. Automating repetitive work is sensible. Hiring experienced people is sensible. Pushing teams toward more strategic output is sensible. None of those choices is inherently wrong. The problem appears when they are made without a corresponding plan to build the next layer of capability underneath them.

Figure 7 shows the balance leaders need to strike. Too little friction produces polished output without enough understanding, while too much produces overwhelm. Capability grows in the productive middle.

That is why this issue is worth testing explicitly inside the business.

First, have junior roles actually been redesigned to build real capability, or have they mostly been redesigned to remove work? In an AI-shaped environment, those are not the same thing. If early-career staff can produce polished outputs more quickly but are getting less exposure to troubleshooting, system behaviour, operational constraints and guided responsibility, the organisation may be improving short-term throughput while weakening long-term resilience. The evidence on AI’s impact on novice productivity makes this tension real, not theoretical.

Second, can early-career people explain how the underlying system works, or can they mainly produce the answer layer that sits on top of it? In technical and operational contexts, that distinction matters. A person who can generate a recommendation but cannot trace the dependencies, trade-offs or likely failure modes underneath it is not yet developing the kind of judgment the business will later depend on. That is exactly why hands-on exposure, exploratory learning and practical contact with real systems remain so important in capability formation.

Third, is the organisation genuinely growing future operators, engineers and leaders internally, or is it mostly buying finished capability from outside? External hiring will always matter. But where a business relies too heavily on senior-ready talent while underinvesting in the pathway underneath, it often ends up with weaker succession depth, greater key-person dependency, and a thinner bench beneath its most experienced people. The labour-market evidence already suggests that entry-level pathways are under pressure, which makes this question more urgent, not less.

These are not just workforce questions. They are resilience questions. They shape who will be able to recover the business when something important breaks, who will carry tacit knowledge forward, who will turn strategy into delivery, and who will eventually lead with credibility grounded in the work itself.

I am optimistic about AI. I think it will create new pathways, lift productivity, and improve the quality of work in countless environments. But I also think it is exposing a design challenge that leaders can no longer afford to ignore. If we use AI mainly to rise faster into abstraction without being equally deliberate about growing the people underneath it, we risk producing organisations that look more advanced on the surface while becoming thinner, shallower and more fragile underneath.

In an age obsessed with moving faster to the top, the most valuable organisations may be the ones still willing to build from the bottom.

References

  • Australian Industry Group. (2025). Manufacturing in Australia: Performance and outlook report 2025.
  • Brynjolfsson, E., Chandar, B., & Chen, R. (2025). Canaries in the coal mine? Six facts about the recent employment effects of artificial intelligence. Stanford Digital Economy Lab.
  • Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). Generative AI at work (NBER Working Paper No. 31161). National Bureau of Economic Research. https://doi.org/10.3386/w31161
  • Catal, C., Ozcan, A., Donmez, E., & Kasif, A. (2023). Analysis of cyber security knowledge gaps based on cyber security body of knowledge. Education and Information Technologies, 28(2), 1809–1831. https://doi.org/10.1007/s10639-022-11261-8
  • Department of Climate Change, Energy, the Environment and Water. (2026, April 1). Minimum Stockholding Obligation. Australian Government.
  • Department of Employment and Workplace Relations. (2019, October 21). The transition of the Australian car manufacturing sector: Outcomes and best practice: Summary report. Australian Government.
  • Department of Employment and Workplace Relations. (2020). The transition of the Australian car manufacturing sector: Outcomes and best practice: Full report. Australian Government.
  • Fernando, J. (2025, November). Technology investment and AI: What are firms telling us? Reserve Bank of Australia Bulletin. Reserve Bank of Australia.
  • Hult International Business School. (2025). New survey reveals traditional undergraduate education is failing to prepare students for work. Hult International Business School / Workplace Intelligence.
  • Hussim, H., Mahmud, S. N., Halim, L., Osman, K., & Lay, A. N. (2024). A systematic literature review of informal STEM learning. European Journal of STEM Education, 9(1), 07. https://doi.org/10.20897/ejsteme/14609
  • OECD. (2024). Curiosity. OECD Learning Compass 2030 concept notes series. Organisation for Economic Co-operation and Development.
  • Reuters. (2026, March 28). Australia to amend export finance laws to boost fuel security, PM Albanese says. Reuters.
  • SignalFire. (2025). State of talent report 2025. SignalFire.
  • World Economic Forum. (2025). The Future of Jobs Report 2025. World Economic Forum. https://www.weforum.org/publications/the-future-of-jobs-report-2025/