Archives for the month of: September, 2019


I had the honour of speaking on a panel at the World Futures Forum on Tuesday 24th September. The opening keynote by Futurist Matthew Griffin introduced a mind-boggling number of emerging technologies between now and 2080; see the fascinating “codex”.

For the opening question of the panel session he chaired, Griffin asked me “are we prepared for the future? Is education preparing our learners for the future?” and I said something like …

No! When have we ever been prepared for the future? I’m not sure it’s the main purpose of education to produce the future workforce. I think there’s a set of issues around what we learn and how we learn. We don’t know exactly what we need to learn but I don’t think we should throw away the way we teach existing disciplines. We still need deep specialists in STEM. But we need them to collaborate in the workplace with other deep specialists: that’s where a lot of innovation comes from. We need “soft” skills of the human touch, of empathy, of ethical thinking: human skills. It’s not just about STEM and human skills though. Many of these emerging technologies feel like sci-fi. I read a lot of sci-fi. Often sci-fi is dystopian. We need historians and sociologists and philosophers too, to avoid these technologies leading us to bad futures.

It was probably more garbled than that, but that’s the gist.

Human skills were a recurrent theme of the day: adaptability, collaboration, empathy, problem solving, communication etc. There were some really good inputs about how to describe, develop and promote those skills. There was a strong sense of needing to actively develop and evidence these skills, described well by Tom Ravenscroft. There were calls from Laura Overton to redesign the way we support learning in the workplace.

Lord Jim Knight focussed on his considerable expertise around schools and made an interesting observation that “in employability conversations employers often say urgent and radical change is needed. Until it’s their own children they’re thinking about”. He called for education to do as much for wellbeing as for skills, and he railed against the over-testing in primary schools. Amen.

I feel strangely unpanicked about the idea that my children will have to retrain several times for the workforce of the future. Perhaps that’s because I never trained for a “career”. I did philosophy and literature and then followed my nose, finding my way into technology in education. The only job title a careers teacher would recognise was “bookseller”, and that was early on my path. I’ve had about eight employers in my 20 years of full-time work. Following my nose has served me well, so it doesn’t scare me that my kids might have to do the same.

The words “work”, “jobs” and “careers” were used somewhat interchangeably today and I am realising that masks something. I have friends who are experts in “careers” and they would be the first to say that work ≠ job ≠ career. What does that unmask? Not all work is paid. Not all jobs are careers or jobs for life. Not all work pays fairly. Also, importantly, not all work is good.

Taking each point in turn …

Not all work is paid

Economists would tell you that unpaid work is a significant factor in any economy. Invisible Women by Caroline Criado-Perez describes the way that work gets done in societies. Work like cooking, cleaning, childcare and caring for the sick and elderly is often unpaid, and it is overwhelmingly done by women.

Actually there is a historic pattern that when unpaid work becomes paid work, more men start doing it. So the idea that work I used to do is being done by someone else is not a new idea. It’s just that usually it doesn’t happen to men. And this time it’s automation “stealing” the “jobs”.

On a different angle, Matthew Taylor from the RSA made a very salient point that the automation narrative is politically dangerous. Sociologists have found in surveys that around 40% of people feel the system of our current society should be smashed: there are people who want chaos. He suggested we should not feed that fire by threatening loss of work to automation.

Not all jobs are careers or jobs for life

Criado-Perez documents that the majority of the part time workforce is female. Juggling multiple work roles, both paid and unpaid, is common in many cultures.

When people bemoan that our children cannot expect a job for life, I reflect that I never expected a job for life. The sectors of our economy where people had jobs for life may be a mixture of “professions” such as accountants, lawyers and engineers, and unionised skilled labour such as manufacturing, steel, construction etc. I have a strong suspicion that the data would show that for the decades these were secure jobs for life they were largely male.

Not all work pays fairly

It doesn’t take long to recognise that some of the jobs that are most materially important to society are the lowest paid. Where would we be without people to empty bins, pick crops, care for the elderly, look after our kids? The importance of this work is not reflected in its pay. So even when work is paid, it is paid according to what the worker will accept and what the employer will offer. Is it a coincidence that these lowest-paid jobs are disproportionately done by immigrants? And yet some of these lowest-paid jobs are the most human, and the least likely to be automated.

Not all work is good

Companies that make stuff and sell stuff can make profit and therefore can afford to create jobs and pay people. As long as there are people to buy the stuff, there can be work to make the stuff. And yet we know that some of this stuff is bad for people, health and the planet. Junk food, cigarettes, plastic goods, petrol cars, weapons. But these industries employ huge numbers of people and therefore there are vested interests in retaining those jobs even if the overall impact of the work they do is detrimental to our future.

To tackle the climate crisis we need to pivot to a low growth economy. Reducing steel manufacture, fossil fuel-based industries, petrol/diesel cars, car ownership, air travel, food packaging, food wastage … this will all mean a loss of jobs. But that shouldn’t stop it happening. Incidentally this is also why the idea of a red-green new deal needs exploring seriously. The UK Labour Party and its Trade Union partners need to navigate the opportunity to rethink job security in the light of a low growth green economy.

Putting all this together … universal basic income is beginning to sound like a smart way of mitigating the effects of adjusting to a low growth economy, of mitigating the loss of work to automation, and of enabling part-time work. This would also have the benefit of valuing unpaid work and enabling lifelong learning. I’ve been reading about the history of UBI and it’s a case study of an idea that has been in and out of fashion, on both the left and the right. Its time has come.

To come back to the emerging technologies question, Matthew Taylor pointed out that along with technologies being hard to predict, even more so are the human behaviours and cultural factors in the use of technologies. On top of that we have the ways in which the developers and suppliers of technologies have to find business cases to underpin their endeavours. Much of the consumer tech breakthroughs of the last twenty years have been catalysed through the disruption and invention of business cases.

We shouldn’t pursue every new technology just because we can. It has to be useful and ethical. The climate emergency should make us prioritise those developments that will help us tackle our biggest crisis. Technology should not be driven by what consumers want but by what humans need. That’s why we need social scientists and humanists deeply engaged with emerging technologies: and we need diverse and critical voices to shape our global priorities.

I found the event really thought-provoking and I’m very grateful to Matthew Griffin and the organising team for the invite. There is a world of thinking out there about the future of work, tech and learning. I think I’ll start with the RSA Future of Work, put on my science fiction far-future goggles for the emerging technologies codex and I’ll keep a special eye out for gender analysis in these spaces.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

This may seem like a strange topic in the context of our digital lives, but just think of all the ways our tools help us format-shift for convenience:

  • listening to a book rather than reading it because we’re driving to work, but will switch back to reading at bedtime
  • watching Facebook videos with subtitles on to avoid disturbing people
  • making voice-to-text memos because it’s easier on fingers/thumbs than typing, and we can do it while walking
  • recording voice messages on WhatsApp because it conveys emotion/mood better and might be faster

Whether we are just consuming content or preparing it for others to consume, I love that the sender can encode a message in one format and the receiver can decode it in a format of their choice.

Accessibility is a hot topic right now, and it really has come of age. It’s so useful for people to be able to format shift, for reasons of sight, hearing, fine motor capability, cognitive processing and behavioural preferences.

Years ago I recruited Jonathan, a skilled content editor with impaired hearing and a wry sense of humour. He had a stenographer come to our organisational briefing meetings and I loved watching the slight delay between the Chief Exec making a “joke”, the words of the joke appearing on Jonathan’s laptop screen and his sarcastic hmphhh. These days we could switch on the Google transcribe app on my phone and the attempt at a joke would be machine-transcribed. Hmphhh.

I am surprised that Microsoft hasn’t realised the flaw with pushing Cortana voice-activation in the workplace. So many of us work in open plan offices: do we really want colleagues to overhear us scrambling about to find the document we lost, or fumbling with specialist software we clearly haven’t used for ages? I’ll type, thanks.

There’s also something going on here about multi-tasking. I like to listen to Medium articles through a text-to-speech reader while I wander about the house sorting out washing. I completely understand why someone would want to re-listen to a lecture recording while cooking. Yes, I know that the evidence says we’re not as good at multi-tasking as we think.

Which leads me also to captions/subtitles. Apparently the use of subtitles is rising steeply, and not just amongst the hard-of-hearing. As well as the need to sometimes watch videos without sound, another scenario is that visuals/audio alone aren’t enough to hold our attention, but subtitles as well might just keep us looking at the screen. We can use subtitles as an attention management hook. I know I do: sometimes it’s all that keeps me from playing klondike solitaire while I’m watching a film.

Three cheers for format shifting: what’s not to like?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

Image: bananagram words spelling out the blog post themes

Digital Lives: Formats, Privacy and Presence

I’ve been chewing over a few themes for the past six months or so and it seems time to try to blog them. They all feel connected somehow. There is much more to say to apply this to education and the workplace but I thought I’d start by laying these themes out …




I’m interested to hear what you make of these posts. Comments very welcome.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

As I learnt in my Communication Studies A Level, the person sending a communication encodes it into a medium, and it is then decoded out of the medium by the receiver. The choice of medium for a two-way communication might necessitate a time delay between sender and receiver, such as a carrier pigeon or a paper letter in the post. Or it might be instantaneous, such as semaphore or telephone. The terms we use to describe live and not live are synchronous and asynchronous. Of course in technical terms there might be a slight delay even in a live communication, such as long distance telephone calls or a live translator, but I will count them as synchronous.

Social media platforms combined with near-ubiquitous connectivity are particularly increasing the synchronous options. More platforms now support both synchronous and asynchronous, but they also support a blurry space in between, known as near-synchronous. Some platforms show you when someone is online to read your message, and even whether they’ve read your message. They might even show you when they are typing a reply. If you’ve ever had a tense conversation on WhatsApp you’ll know the frustration of watching the “…” disappear as someone decides to delete what they had been typing.

The negotiation of norms and expectations of these sorts of platforms is rarely explicit: people adopt in clusters, the clusters grow, and multiple cultures develop. The etiquette of exiting a Facebook messenger group about a get-together I’m not going to is awkward every time. Perhaps others have better “socmed” skills than me.

So given the implicit and evolving rules of social media, I think the near-synchronous scenario is a particularly challenging one in which to establish norms.

Personally, I find myself using the phone less and less, and preferring asynchronous platforms because:

  • it gives me permission to think before replying
  • it gives me permission to be off-grid, off-line, unavailable without prejudice
  • emails can be saved as a more discrete time-stamped artefact

In a work context, my colleagues and I are using Teams more and more. However, even amongst our team of 15ish there are “residents” and “visitors”, so I can never assume that someone has seen a message.

See more about the Digital Visitors and Residents model (a great replacement for the Digital Natives and Immigrants concept).

When would you send an email, when would you send a message on Teams or on Skype, and when, if they are nearby, would you walk over to them? What determines the boundaries of the group and the scope of the collective norms? Traditionalists would tell me that the organisational structure will determine boundaries: but that doesn’t work if the role of your function is to collaborate. There is something organic about collective adoption of tools. I’d love for someone to point me to a conceptual model describing the different factors affecting adoption of something like Teams. I suspect it’s something like:

  • does it get traction with a critical mass or does adoption have to be universal?
  • does it require notifications and follows to be set up? because that can put people off and gives reluctant participants an excuse to not keep up
  • does it match existing organisational units or is its value precisely that it cuts across the traditional structure?
  • does the Nielsen 90/9/1 rule of participatory media apply, and are there enough participants in the 1% for it to be a discussion rather than a monologue?
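That last question can be put into rough numbers. A minimal back-of-envelope sketch (my own illustration, not from Nielsen’s work) of why small groups struggle under the 90/9/1 split — 90% lurkers, 9% occasional contributors, 1% heavy contributors:

```python
# Back-of-envelope sketch of the 90/9/1 participation rule.
# Assumed proportions: 90% lurk, 9% contribute occasionally, 1% contribute heavily.

def expected_contributors(members: int, heavy: float = 0.01, occasional: float = 0.09) -> float:
    """Expected number of members who post at all, under the 90/9/1 split."""
    return members * (heavy + occasional)

def expected_heavy(members: int, heavy: float = 0.01) -> float:
    """Expected number of heavy (regular) contributors."""
    return members * heavy

for size in (15, 150, 1500):
    print(f"{size} members -> ~{expected_contributors(size):.1f} posting at all, "
          f"~{expected_heavy(size):.1f} posting regularly")
```

On these assumptions a team of 15 has well under one heavy contributor in expectation, which is exactly the monologue risk: a channel needs to cut across a much larger population before the 1% adds up to a conversation.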

There’s much more to say about the potential role of Teams in an educational context; I hope to come back to that in a separate post.


There’s a common complaint that being glued to your phone on the train is “anti-social”, but who’s to say that’s not his poorly grandad he’s messaging with? We split our attention between physical presence and digital presence. What does it mean when someone is both present in the room and also present in a social media text chat, and maybe even also listening to music in their headphones? They are multi-tasking and multi-present. How many channels can we cope with in one go, particularly social channels? And if it’s a choice between a slightly distracted social connection or no connection at all: what is best? In particular we parents are berated for not giving our kids 100% attention. But when did that ever happen? In what period of history were children given all of their parents’ attention? Maybe that mum sat at the park on her phone is helping a friend through a crisis?

So … what matters most: presence or participation? And do we sometimes set the bar for online participation higher than for physical participation? And is multi-presence a good thing?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy and Presence.






This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

A recurring theme in conversations about “the digital age” is privacy. Facebook, Google’s “don’t be evil”, the Cambridge Analytica scandal … and then the backlashes and attempts to regulate: cookie law, GDPR, people leaving Facebook. It’s a very real question of what price do we pay for the convenience of social media platforms and googley/appley joined-up services.

It is fashionable to say that this is not a price worth paying. But that position also comes from a place of privilege: when people say “I’d rather pay in my cash than my data”, “I’d rather keep in touch with my family outside Facebook”, “I prefer to socialise face to face”. Great: those are choices that some people can make. Not everyone has the money or social power to make that choice.

I feel sad at a scenario where we give up wanting a universal everyone-is-welcome, free at the point of use social network. I don’t want to have to choose a platform.

If you’ve followed the Cambridge Analytica story you’ll know that your Facebook friends make decisions about your privacy, without your knowledge or consent. It’s not just about your personal decisions. Apparently one of the big threats to the security of at-risk children is their own grandparents being unable to resist posting photographs and giving away location information.

Is a conversation on WhatsApp private compared to a conversation on Facebook? Socially it might be. A 1:1 exchange is easy to know the boundaries of. But if the other person adds someone, can that person see the previous exchange?

I have joined political groups and support groups on Facebook: do I really know who can see that? If I like something on an endometriosis support group, do I mind that other people can see it?

In the workplace I regularly confuse myself with sharing documents and online folders: who can see what, and why? I want to be collaborative in my document authoring but how do I make sure commenters know who will see their comments now and in the future?

If I designed my household for my family’s use and then gave the key to another 10 people to come and let themselves in whenever they liked, what would that do to the sense of home?

These are boundaries we negotiate in our digital lives. If I’m honest I find it a bit stressful, and I’m a “digital resident“. I can imagine it is very uncomfortable for people who feel like visitors to digital spaces.

But but but … privacy is a historical concept. It hasn’t always been an expectation. In pre-industrial England people lived in smaller groups and unless they left their village they carried their history with them. Read Thomas Hardy, Jane Austen, Charles Dickens for evidence that no-one really runs away from their past: through the six degrees of separation someone will spill the beans to our heroine about the handsome stranger.

Should we expect privacy to persist as a value for another century, or is it a nice-to-have that could be traded off for social connection and convenience? Or am I only thinking that from a position of convenience, as someone who has already married and established a career?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.