🌱 The impact of AI on jobs

The move towards a central competency

Nate B Jones (9th February 2026) claims that AI will cause a convergence of previously distinct knowledge-work roles such as engineering, product management, marketing, analysis, design, and operations. Rather than focussing on domain knowledge, new roles will be centred on orchestrating AI agents to achieve outcomes. Domain expertise will thus become a foundational skill rather than a differentiator - instead of 'expertise' or competency being underpinned by how much domain knowledge a person has built up, it will be increasingly determined by how effectively a person can direct AI systems.

This also means that careers will no longer be built steadily over years: progress in AI will drive much shorter cycles, and skills will need to be continuously updated.

He recommends that all workers practise continuous, practical engagement with AI rather than cautious observation or one-off training. And rather than focussing on specific AI tools or skills, he suggests concentrating on the overall skill of directing AI agents and thinking in terms of systems, data flows, and tool-enabled workflows.

My thoughts: whilst I understand his point of view, I wonder what 'directing' or 'orchestrating' AI agents will actually mean in practice. Unless it factors out all human judgment or variability (in which case domain knowledge will not be a foundational skill, as it won't be required at all), people will still bring different approaches, strengths, and values to the work. I think, for example, of the subtly different instincts of project managers (deliver to deadlines) and analysts (find the right answer) - will that variability still matter in the future?

In another video (14th February 2026), Jones sets out four 'surviving skills' that he claims will be the future differentiators: taste (the ability to recognise what is strategically right rather than merely technically plausible, and to distinguish competent output from truly fit-for-purpose output); exquisite domain judgment (built through experience - knowing what matters and what fails later); adaptability (the ability to learn rapidly and to continuously update working models as capabilities shift); and relentless self-honesty (continually auditing your own value - which elements are durable and which are being commoditised by agents - and adjusting accordingly).

His emphasis on 'exquisite domain judgment' sits slightly at odds with his earlier prediction (above) that domain knowledge won't be a differentiator.

He urges an 'agent-first' mindset rather than 'AI features added to existing work'. I agree with this, and have considered its implications for process discovery.