🌱 The impact of AI on jobs

The move towards a central competency

Nate B Jones (9th February 2026) claims that AI will cause a convergence of previously distinct knowledge-work roles such as engineering, product management, marketing, analysis, design, and operations. Rather than being defined by domain knowledge, new roles will centre on orchestrating AI agents to achieve outcomes. Domain expertise will thus become a foundational skill rather than a differentiator - instead of 'expertise' or competency being underpinned by how much domain knowledge a person has built up, it will increasingly be determined by how effectively a person can direct AI systems.

This also means that careers will no longer be built steadily over years: progress in AI will drive much shorter cycles, so skills must be continuously updated.

He recommends that all workers practise continuous, practical engagement with AI rather than cautious observation or one-off training. And rather than concentrating on specific AI tools or skills, we should focus on the overall skill of directing AI agents and of thinking in terms of systems, data flows, and tool-enabled workflows.

My thoughts: whilst I understand his point of view, I wonder what 'directing' or 'orchestrating' AI agents will actually mean. Unless it factors out all human judgment or variability (in which case domain knowledge will not be a foundational skill, as it won't be required at all), people will still bring with them different approaches, strengths and values. I think, for example, of the subtly different approaches of project managers (deliver to deadlines) and analysts (find the right answer) - will that variability matter in the future?

In another video (14th February 2026), Jones sets out four 'surviving skills' that he claims will be the future differentiators: taste (the ability to recognise what is strategically right, not merely technically plausible, and to distinguish competent output from truly fit-for-purpose output); exquisite domain judgment (built through experience - knowing what matters and what fails later); adaptability (the ability to learn rapidly and to continuously update working models as capabilities shift); and relentless self-honesty (continuously auditing one's personal value - which elements are durable and which are being commoditised by agents - and adjusting accordingly).

His prediction of 'exquisite domain judgment' sits slightly at odds with his earlier claim (above) that domain knowledge won't be a differentiator.

He urges an 'agent-first' mindset rather than 'AI features added to existing work'. I agree with this, and have considered the implications of this for process discovery.

The continuing importance of domain knowledge

Philipp D. Dubach, in The Impossible Backhand (17th February 2026), makes a case for the continuing importance of domain knowledge even with the rise of AI. He uses the example of a generated photorealistic image of a tennis player that would look plausible to most people but not to tennis players, who would know that the backhand depicted was impossible.

He argues that this limitation is not a temporary engineering hurdle but a structural one, rooted in how large language models (LLMs) are built and trained. He identifies three core causes: (a) next-token prediction inherently gravitates towards average, statistically probable outputs rather than exceptional or nuanced ones; (b) reinforcement learning from human feedback (RLHF) biases models toward familiar, typical responses; and (c) 'model collapse' - the degradation of tail quality as models increasingly train on AI-generated content - further narrows quality variance.
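Cause (a) can be made concrete with a toy sketch (the tokens and probabilities below are invented for illustration, not taken from any real model): under greedy decoding, the single most probable continuation always wins, so a rare-but-correct output - the tennis-expert's 'right' answer - is never produced.

```python
# Toy illustration of how next-token prediction gravitates towards the
# statistically probable. Vocabulary and probabilities are invented.
next_token_probs = {
    "forehand": 0.55,  # the common, 'average' continuation
    "backhand": 0.35,
    "tweener": 0.10,   # rare, but sometimes the genuinely right shot
}

def greedy_decode(probs: dict) -> str:
    """Return the single most probable token, as greedy decoding would."""
    return max(probs, key=probs.get)

# Greedy decoding always emits the modal token, never the rare one.
print(greedy_decode(next_token_probs))  # -> "forehand"
```

Real models sample rather than always taking the argmax, but the same pull applies: low-probability tail outputs are systematically under-produced, which is exactly the regression-to-the-average behaviour Dubach describes.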

Humans add value by identifying and correcting AI mistakes, which requires deep domain knowledge. This leads him to advise against deskilling through over-reliance on AI output.