The Future of Work

However you are thinking about the future of work, and whatever organisational structure or ecosystem you seek to create, Bioss can support its successful implementation.


At the heart of any organisation sit human judgement and decision making, and the working relationships we have with colleagues, near or far. These relationships – from the Board to the front line – are the connective tissue upon which the delivery of all work depends. An effective and sustainable response to Environmental, Social and Governance (ESG) agendas, to the complexity of shifting geopolitics, to the challenges of social media, to the impact of robotics and AI, to diversity and inclusion agendas, and to balancing the needs of multiple stakeholders all depends on the health of those working relationships.

Supporting our clients to be more resilient in the face of these challenges and opportunities is at the heart of Bioss’s work.

The future of work also includes the emerging working relationship between human judgement, data and the work done by Artificial Intelligence. Bioss has built on its focus on human judgement and decision making to provide innovative approaches and tools, both for the governance of AI at Board and C-Suite level and for a dynamic understanding of the working relationships that already exist with AI in many organisations.

AI @ Work

We will have recognisable ‘working relationships’ with artificial intelligence systems, even though they are not human and cannot be accountable for their work. These relationships will develop over time – yet none of us is quite sure just how.

AI systems are at work in a variety of ways in the decision-making ecosystem of many organisations.

As part of its Conditions Analytics suite of tools, Bioss has created an ‘AI Protocol’, a powerful non-legal, non-technical framework designed to provide a living map of these day-to-day working relationships between people and AI systems. It can be adopted as part of the ‘safe’, ‘aligned’ or ‘ethical’ deployment of AI systems.

In addition to the AI Protocol, Bioss is a significant contributor, through CEO Robbie Stamp, to the British Standards Institution’s National Standing Committee on AI, working specifically on Board governance of AI and on ISO recommendations.

What’s the Work?

Wise governance by business and government should be based on understanding key boundaries in relation to the work humans do and the work we ‘task’ AI systems with.

It is these boundaries that should be understood, rather than reliance being placed upon hard and fast rules. That is the core question – “what’s the work?” – not “is the AI system intelligent like us?” or “is it ethical?”

As humans we test our judgements by putting them into practice and seeing whether the results are satisfactory, whether they solve the problems they were designed to solve, whether the consequences are acceptable, and whether they enable a successful response to novel problems.

The questions we ask in the Protocol and in the AI Working Relationships Appreciation (part of the Conditions Analytics Platform) are thus not value judgements (“is this good or bad?”); it is the analysis that flows from asking them in the first place that matters.

Work through the impacts and implications in context. Keep the inputs and outputs under constant review and cross certain key boundaries consciously.


The Bioss AI Protocol

For all the fallibility of human institutions, accountability must lie with boards and governments.

Advisory

Is the work the AI is doing Advisory – does it leave space for human judgement and decision-making? If so, what data and assumptions lie behind the AI’s ‘advice’? And whose assumptions are they?

Authority

Has the AI been granted, implicitly or explicitly, any Authority – power over people, who are now ‘simply’ agents for carrying out instructions?

Agency

How much Agency has the AI been granted – the ability to commit resources and expose the organisation, its people or the wider society to risk (or opportunity) in a given environment, without a human being in the loop? Might agency be precisely the right thing to grant? If the risks are high, what have we done to model outcomes?

Abdicating

How conscious are we – at every stage of AI deployment – about the skills and responsibilities we are at risk of Abdicating? What human skills will atrophy? Because we can replace jobs, should we – and if so, at what pace, with what consequences, and with what planning?

Accountability

Are the human lines of Accountability clear for the work the AI is doing? This is a critical issue and underpins each of ‘Advisory, Authority, Agency, and Abdication’.
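
Purely as an illustration – and not part of the Bioss AI Protocol or the Conditions Analytics Platform – the five questions above could be kept under review as a simple structured record. The class, field and example names in this Python sketch are hypothetical:

```python
# Illustrative sketch only: a minimal, hypothetical record of the five 'A'
# questions for one AI system at work. Not part of the Bioss AI Protocol.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AIWorkingRelationshipReview:
    """Notes against each of the five questions for a single AI system."""
    system_name: str
    advisory: str = ""        # Does the AI's output leave space for human judgement?
    authority: str = ""       # Has it been granted power over people, implicitly or explicitly?
    agency: str = ""          # Can it commit resources or expose people to risk without a human in the loop?
    abdicating: str = ""      # Which human skills and responsibilities are at risk of atrophy?
    accountability: str = ""  # Which human is accountable for the work the AI is doing?

    def open_questions(self) -> list[str]:
        """Return the dimensions that have not yet been answered."""
        answers = {
            "Advisory": self.advisory,
            "Authority": self.authority,
            "Agency": self.agency,
            "Abdicating": self.abdicating,
            "Accountability": self.accountability,
        }
        return [name for name, answer in answers.items() if not answer.strip()]


# Example: a partially completed review flags what still needs discussion.
review = AIWorkingRelationshipReview(
    system_name="loan-approval recommender",
    advisory="Recommends a decision; a human underwriter signs off.",
    accountability="Head of Credit Risk remains accountable for outcomes.",
)
print(review.open_questions())  # ['Authority', 'Agency', 'Abdicating']
```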

We should treat people as people, not as so many data points, and should look to deepen and honour human capability, not to impoverish it.

That would be an ethical thing for organisations and governments to do.