I’ve been thinking a lot about rules. We are currently operating under Covid restrictions: we’ve had the “rule of six”, constraints on business operations, reporting and testing regulations. Everyone is talking about “the rules”. I’m also dieting at the moment (for the first time in my life!) and on a forum where everyone asks about the rules: what are we allowed? How many carbs can I have? Is alcohol against the rules? And in my day job, I work on guidance and processes around the use of technology in teaching: constraints and expectation-setting. I also just finished watching The Queen’s Gambit, which is all about chess, a game I can’t play but admire. The Rules. There are rules everywhere.

But what are rules?

Guidance and Rules

There is a blurry line between rules and guidance and I guess it depends on the implicit contract between the rule-makers and the subjects.

  • In chess: you can’t really break the rules. Simple!
  • In a diet, following the guidance is described by many dieters as submitting to the rules. When people “break” the rules in a diet they often use moral terminology: sinning, being naughty, falling off the wagon.
  • In a workplace, people may or may not accept the authority of the rule-maker. In the university where I work I can think of many examples where people aren’t really accepting the authority of “the centre” to make the rules. We have a lot of guidance that could be rules if the culture allowed it.
  • In a pandemic, well … what we’re seeing right now is a mix of attitudes towards the guidance and the rules and it has a lot to do with attitudes towards government and “science”.


Some rules are enforceable, some are very hard to enforce.

The police can’t possibly have enforced sanctions on every breach of the rule of six: there aren’t enough police and it would be massively intrusive within people’s homes. But they needed the rule so that when they did need to intervene in the situation they had a rule to hook it on to.

In my workplace, there are many workarounds and exceptions to most things that we ask people to do or not do. A lot of what we do to tighten control is try to introduce sanctions. Rarely do we make it impossible to break rules.

We often talk about incentivising and disincentivising, and about nudge as a subtle incentive, and best practice. I picture it like this:

Impact: Individual vs Collective

One factor in how people respond to rules is their attitude towards the rule-makers. But I wonder if another factor in how people respond is their mindset. Rules are designed to control or influence the behaviours of individuals. But they are designed at the level of collective impact. They are designed around models of what happens if all, or most, people do something. Many rules are made in the knowledge that there will be exceptions. For some people, hunting out those exceptions seems to be a first instinct. In my workplace I’ve seen policy discussions where the proposed rule will work in 80% of cases (“the Pareto principle”) but everyone wants to discuss the 20%, even when we’ve already established there will be exceptions. I am guilty of that myself. It is our culture to seek out “edge cases” and foreground those as reasons to delay rule-making.

Sadly, I have also seen way too many examples of people not being able to see the collective impact of following or breaking rules. If everyone parked in a Disabled Parking spot because they were “only popping in to a shop”, the system breaks down. If everyone jumped the queue there would be no queue. If no-one wore a mask because they were “uncomfortable”, there would be no barriers to transmission. The whole point of rules is to seek sustainable behaviours at the collective level. Rules are a collective endeavour.

Final Thought

I am not naturally trusting of authority. I don’t trust our current government; I reserve the right to challenge everything a government does. I don’t trust capitalism to make the rules, and I don’t trust any religion to make the rules. I’ve never 100% trusted any employer, any organisation, or any person, come to think of it. But I think we need to pick our battles wisely: some rules can be followed without much negative personal impact (wear a mask, for God’s sake), and some rules might seem overkill but maybe they are there for the collective good. Maybe they are there to manage demand patterns, for example “flattening the curve” to protect NHS capacity. There might be factors at play that we don’t understand. It’s probably complicated.

I’d quite like a simpler world some days, with rules that everyone understands and everyone sticks to. If anyone has any tips on learning chess, it’s suddenly seeming very enticing!

Update Oct 2020: I recommend reading this article about merging modalities by Valerie Irvine.

At my University, our Academic Technologies team, Academic Development Centre and temporarily formed Learning Design Consultancy Unit have been creating guidance and training to support a wide range of practices suddenly made essential because of the Covid-19 pandemic. I talked about some of them at the Jisc conference back in June.

The last few months I have been circling around the challenges of “hybrid” teaching and that’s the focus of this post. I am currently awaiting feedback from my colleagues on some guidance I’ve drafted but I thought I would also share my thoughts in the open. Comments very welcome.

The challenge is how to plan for, and deliver, taught sessions to a mixed cohort where some students are present in-person on campus and some are not. There are difficult choices my academic colleagues are having to make.

This isn’t course design from scratch; this is adjustments to existing approved modules, part of existing approved courses that students have already signed up to. The pre-Covid guidance doesn’t cater for the current scenario. Whatever the academic’s intentions and whatever the student’s preference, there is a chance that a proportion of any class will not be present in person, due to delayed arrival on campus or quarantine. This is inevitable and somehow needs to be catered for.

We want to support these difficult design decisions with clear guidance, but it’s hard to do that with confidence. It’s made harder by a lack of agreed terminology in the education community and by some nuance between different technical set-ups.

This QAA taxonomy is helpful but I disagree that hybrid and blended are interchangeable terms. To me, hybrid is a word specifically to describe a teaching session with an in-room audience and a remote audience. I don’t know why I think of that definition so strongly: clearly not everyone does. But we need a word for dual audience / dual mode / mixed mode teaching events. This would aid conversations between academics and their collaborators, and make for clearer design decisions.

There is a whole set of challenges to delivering a hybrid session in that sense. How meaningful is the participation for the remote audience? How does trying to accommodate the remote audience impact participation in the room? How much better is the student experience of a scheduled online synchronous option with limited participation, compared to a recording watched after the event?

A related question around remote participation is the variety of options. What is the difference between a livestream model with controlled participation options and a passive broadcast with no expectation of remote audience interaction? There is a spectrum within hybrid sessions, between broadcast at one end and meaningful synchronous interaction (particularly peer learning) at the other. If the student experience is at the broadcast end, it is worth considering whether recording the session and releasing it afterwards would actually make for a better experience for both in-person and remote participants.

The next contentious word needing definition is hyflex. To me, hyflex is a characteristic of a course/module design where an individual student can switch between modes for different activities. They might switch day by day, or week by week. The key is that they can choose whether to engage online asynchronously, online synchronously, or in person if that’s an option. A skilled teacher can design that. But not everyone has that level of skill (yet) AND it challenges practices around student timetables and attendance monitoring. So I see hyflex as desirable but difficult to design. Once it’s designed, though, I’d suggest that it’s easier to deliver an online asynchronous, online synchronous or in-person session than a hybrid taught session. It might take more time, and that’s a problem of logistics and workload.

So … in my mind:

Hyflex is a characteristic of a module/course, not a particular session.

Hybrid is a characteristic of a session.

A hybrid session might be a component of a hyflex module/course, but does not in itself make a course hyflex, because it’s only one component of the course.

A hybrid session is difficult to deliver without another staff member.

The best tool in the world can be used poorly if the session design isn’t clear.

The level of meaningful remote participation in a hybrid session will be determined by the skill of staff and availability of additional staff, mixed with the appropriate use of audience feedback methods and functionality. There is a threshold of meaningful participation, below which it might not add much value.

The ability to provide a hyflex course requires institutional capability around timetabling, attendance management, and quality assurance methods as well as real design skill by academics and their collaborators.


I love a metaphor, especially a food one. Trying this out …

Flour, eggs, rolling pin, tea towel
Amber Thomas 2020 CC-NC-SA

The race to “put teaching online” as a result of Covid-19 has surfaced the fact that many people have a skewed understanding of what online learning is. Martin Weller highlights how out of date that perception is, and Christina Costa describes some of the misunderstandings.

Part of the problem is promotional rhetoric from educational technology companies. They sell a shiny version of the future. Where personalised means impersonal. Where learning is tracked to within an inch of its existence.

Unfortunately, if staff unfamiliar with blended learning hear those messages then they can be forgiven for discounting “ed tech”. They will be angered by the “disruptors” saying education needs a revolution, and find themselves siding with the “resisters”, who may be just as polemical and biased.

For academics who have avoided the VLE for their modules and only know it as a file store or as clickable staff training courses, that’s what they think is being asked of them right now: they think they need to create clicky content and fancy animations. It is alienating to academics who feel they would be feeding a machine, ceding control to an impersonal content development studio.

And it creates a huge suspicion of educational technologists within institutions that they are just there to “push product”, or to transform materials into something they will lose control of. Professional workflows are needed for quality and scale but this current pivot isn’t about scaling up courses for mass enrolment, it’s about translating the student learning experience on existing courses at their current scale.

As an aside, equating ed tech companies with institutional ed tech support is like equating the big pharma industry with your local pharmacist.

So how about describing the situation with this metaphor:

We are not looking to create fast food. Anytime-anyplace shiny looking homogenised standardized food. Low nutritional value but convenient.

Nor are we expecting academics to become Michelin starred chefs overnight, mastering sous vide and serving up intricate instagrammable meals.

What we need is good nutritious home cooking. Made in domestic kitchens, with good quality ingredients and prepared with care. We need dinners around the table, healthy and filling, with good conversation.

At Warwick, part of the Extended Classroom approach is “recipes”, which reflects our thinking that it’s about taking available ingredients, learning some techniques, and having agency over what you cook. We’re realising that academics who haven’t worked with us yet misunderstand what we do and what’s expected of them in this time of pivot. The more people we can reach with our messages about home cooking, the sooner we can demystify what blended learning can be.

UK Public Health Notice March 2020: Coronavirus: stay home, protect the NHS, save lives

As I keep telling my kids, I’ve never experienced anything like this in my life.

I am struggling to comprehend the enormity of the global impact of Covid-19, combined with the effects on our everyday life, the way it touches my own family, and how it shapes my work priorities. I swing between intellectualising it and a more visceral emotional response.

I have family who work in NHS hospitals. I have family who have been stranded in isolation on the Zaandam cruise ship and are currently being repatriated. I know people who have likely had Covid-19, or whose families have. Like many of us, I’ve thought back on periods of illness in the last few months: could that have been a mild case? I spent a week or so with an impaired sense of smell. But it feels ridiculous even to ponder that when there are much more serious situations going on.

I am rendered mute by the disconnect between global serious life-threatening situations and the trivial impact on my little life. And yet that’s what this is: a pandemic that reaches right into our personal everyday lives, even when the illness doesn’t manifest itself in our homes.

I have a fairly big house with a garden, a view of a big open space, a husband and two boys. We get on pretty well despite the confinement. We have jobs, can afford to eat, we feel part of a community and we are well. We are the lucky ones. And yet on Thursday I hit a wall: my coping mechanisms were overloaded with challenges at work, anxiety about my loved ones and the general pressure of circumstances.

So I’ve decided today I would try to write a reflective blog post and see where it leads me. This post marks this point in time, and I may change my perspective completely and read this blogpost back as privileged and naive. I’ll have to forgive myself if that happens; I’m sure my embarrassment about a blogpost would be the least of my worries.

So: reflections …

Over the past few years there have been several topics I have found myself drawn to. Many are strangely relevant to this crisis. I am not superstitious, so I attribute this to a cognitive bias: the wish to believe I have some kind of agency or capacity to deal with it. But for what it’s worth, these are things I have been concerned with:

  • collectivism and individualism, Brexit and the lurch to populists like Johnson and Trump.
  • the climate crisis, how to address the paralysis on action (my own failure to act strongly).
  • the economy, work and jobs, the impact of automation and universal basic income.
  • how and why to use technology in teaching and how digital empowers better ways of working (these topics are my professional responsibility).
  • the broad impact of digital technology on our lives, on our concepts of connectedness, privacy, presence and time.
  • a fascination with science fiction about people that are confined to spaceships and planet settlements without access to earth’s fresh air: what does it do to a culture to be confined, and how that changes how we see Earth.

Books will be written about how Covid-19 changes the world. The world needs to change, so perhaps some good could come of this. Emergency socialism. Reduction of non-essential travel. Reappraising the value of low-paid work.

It doesn’t feel OK to say that, though. How can we pontificate on silver linings when so many will die in the transition to “afterwards”? Not just vulnerable people who we love, but people who were otherwise healthy, and also the people who care for them. Many of us will lose people we love. That’s hard to acknowledge.

So it helps to read pieces that stitch together the global and the personal and try to make sense of these strange times. Here are some articles about the social and psychological impacts that have resonated with me:

I’d welcome recommendations for other pieces that help make sense of the emerging impact of this crisis. Please comment below.

I have more to say about the importance of social media and video conferencing. In my leadership role at a large UK university I have spent the last month developing contingency arrangements for supporting academic continuity through digital approaches. I will reflect on that separately, but my professional challenges are certainly part of the story of the “online learning pivot”.

As many have commented, it turns out the jobs most important to protecting our health are those deemed low skilled and those that are low paid, in public sectors that are underfunded and private sectors with precarious contracts. Health workers, doctors and nurses. Bin men, lorry drivers, supermarket workers.

Reappraising how we value and reward work is long overdue. Clapping for the NHS is definitely not enough: we need sustainable investment that protects our key workers and public health.

I think that’s as much as I can say today. And so I will take a deep breath and carry on with another day under lockdown. Take care.

Hands forming a T shape


I’m a big fan of Matt Jukes’ Digital by Default blog. Matt and I crossed paths at Jisc, in fact he covered my maternity leave once! I find it fascinating that I also worked at Becta with Andy Dudfield and Matt and Andy have done some of the same roles. They are in the world of Government Digital Services and open data whereas I am peddling my skills in higher education.

Which is all a needlessly long introduction to what I want to say about Matt’s post on “multi-hyphenates“. Matt talks about product manager – delivery manager – UX people and references the concept of T-shaped people, or “generalising specialists”. We use that concept in my team a lot, especially when I’m working with Steve Ranford on our version of research software engineer roles.

It’s tempting to draw elegant diagrams about who does what in each role, but I often see “slash” roles and roles that evolve over time. It’s not just about what each role does, but also about how much work there is to do in an organisation and therefore how much space there is for dedicated specialists. As organisations grow, roles grow out from each other, like branches on a tree.

In web, what was a web manager and a content officer becomes a CMS product manager, devops lead, analyst, content designer, UX researcher, user engagement manager. In learning technology, what was a solo elearning advisor evolves into VLE manager, service manager, user support lead, multimedia advisor, learning designer, instructional designer. In change programmes, project managers become surrounded by business analysts, process owners, stakeholder managers, benefits realisation leads. The work becomes bigger; it splits out into deeper specialisms. In many ways that’s what makes “digital” such an interesting field to work in: it is always evolving.

When I’m involved with recruitment I often try to ensure that candidates understand the context of the organisation. They might be used to being the solo elearning person in a small college and need to adjust to being one of six, in a network of 30. Or they might be used to being a test specialist in a team of 10 in a software company and need to adjust to being the only tester in a non-IT company. Context changes the way that knowledge and skills are used.

I’ve been pondering what this means for job satisfaction. Daniel Pink’s book “Drive” talks about autonomy, mastery and purpose. In our line of work, where the boundaries keep changing and the specialisms keep deepening, we each negotiate our way through each evolutionary step. Amongst Heads of eLearning in UK universities there has been huge churn as people move up, across, diagonally (and sometimes out), to fit with the organisational restructures.

Back to T-shaped people. Some people are comfortable knowing a little about a lot, and are able to work horizontally, perhaps preferring the breadth and constant new challenge. Some people are most comfortable knowing a lot about some specific areas and get their satisfaction from gaining mastery of those areas. We need both of those types of people, as well as T-shaped people. And I guess what I’m suggesting is that these things change over time: some specialists become seen as generalists and some generalists become seen as specialists.

I love train journeys that take a route through cities, where I can stare into back gardens and kitchen windows. In each of those towns, streets and houses there is an infinite depth of lived experience. That momentary glance of a back bedroom is a view into someone’s life.

When I think about roles I try to remember to have that humility. There is breadth and depth to digital work and the roles we work within are determined as much by the size of our employing organisation as it is by any illusory truth about how to do digital things. Long may we continue to evolve into deeper and wider spaces.





Today is the 10th year of Ada Lovelace Day: an international celebration of women in Science, Technology, Engineering and Maths.

I invited the women of Warwick University IT Services to a lunch. We made some new connections and discussed ideas for our workplace. We chatted, we ate, we cupcaked. A lovely way to spend a lunchtime 😀

A huge thanks to ITS and the Equality Diversity and Inclusion team for funding our event.




I had the honour of speaking on a panel at the World Futures Forum on Tuesday 24th September. The opening keynote by Futurist Matthew Griffin introduced a mind-boggling number of emerging technologies between now and 2080, see the fascinating “codex“.

As the opening question on the panel session chaired by Griffin he asked me “are we prepared for the future? Is education preparing our learners for the future?” and I said something like …

No! When have we ever been prepared for the future? I’m not sure it’s the main purpose of education to produce the future workforce. I think there’s a set of issues around what we learn and how we learn. We don’t know exactly what we need to learn but I don’t think we should throw away the way we teach existing disciplines. We still need deep specialists in STEM. But we need them to collaborate in the workplace with other deep specialists: that’s where a lot of innovation comes from. We need “soft” skills of the human touch, of empathy, of ethical thinking: human skills. It’s not just about STEM and human skills though. Many of these emerging technologies feel like sci-fi. I read a lot of sci-fi. Often sci-fi is dystopian. We need historians and sociologists and philosophers too, to avoid these technologies leading us to bad futures.

It was probably more garbled than that, but that’s the gist.

Human skills were a recurrent theme of the day: adaptability, collaboration, empathy, problem solving, communication and so on. There were some really good inputs about how to describe, develop and promote those skills. There was a strong sense of needing to actively develop and evidence these skills, described well by Tom Ravenscroft. There were calls from Laura Overton to redesign the way we support learning in the workplace.

Lord Jim Knight focussed on his considerable expertise around schools and made an interesting observation that “in employability conversations employers often say urgent and radical change is needed. Until it’s their own children they’re thinking about”. He called for education to do as much for wellbeing as for skills, and he railed against the over-testing in primary schools. Amen.

I feel strangely unpanicked about the idea that my children will have to retrain several times for the workforce of the future. Perhaps that’s because I never trained for a “career”. I did philosophy and literature and then followed my nose, finding my way into technology in education. The only job title a careers teacher would have recognised was “bookseller”, and that was early on my path. I’ve had about eight employers in my 20 years of full-time work. Following my nose has served me well, so it doesn’t scare me that my kids might have to do the same.

The words “work”, “jobs” and “careers” were used somewhat interchangeably today and I am realising that masks something. I have friends who are experts in “careers” and they would be the first to say that work ≠ job ≠ career. What does that unmask? Not all work is paid. Not all jobs are careers or jobs for life. Not all work pays fairly. Also, importantly, not all work is good.

Taking each point in turn …

Not all work is paid

Economists would tell you that unpaid work is a significant factor in any economy. Invisible Women by Caroline Criado-Perez describes the way that work gets done in societies. Work like cooking, cleaning, childcare and caring for the sick and elderly is often unpaid, and it is overwhelmingly done by women.

Actually, there is a historic pattern that when unpaid work becomes paid work, more men start doing it. So the idea that work I used to do is being done by someone else is not a new idea; it’s just that usually it doesn’t happen to men. And this time it’s automation “stealing” the “jobs”.

On a different angle, Matthew Taylor from the RSA made a very salient point that the automation narrative is politically dangerous. Surveys by sociologists suggest that around 40% of people feel the system of our current society should be smashed: there are people who want chaos. He suggested we should not feed that fire by threatening the loss of work to automation.

Not all jobs are careers or jobs for life

Criado-Perez documents that the majority of the part time workforce is female. Juggling multiple work roles, both paid and unpaid, is common in many cultures.

When people bemoan that our children cannot expect a job for life, I reflect that I never expected a job for life. The sectors of our economy where people had jobs for life may be a mixture of “professions” such as accountants, lawyers and engineers, and unionised skilled labour such as manufacturing, steel, construction etc. I have a strong suspicion that the data would show that for the decades these were secure jobs for life they were largely male.

Not all work pays fairly

It doesn’t take long to recognise that some of the jobs most materially important to society are the lowest paid. Where would we be without people to empty bins, pick crops, care for the elderly, look after our kids? The importance of this work is not reflected in its pay. Even when work is paid, it is paid according to what the worker will accept and what the employer will pay. Is it a coincidence that these lowest paid jobs are more likely to be done by immigrants? And yet some of these lowest paid jobs are the most human, and the least likely to be automated.

Not all work is good

Companies that make stuff and sell stuff can make profit and therefore can afford to create jobs and pay people. As long as there are people to buy the stuff, there can be work to make the stuff. And yet we know that some of this stuff is bad for people, health and the planet. Junk food, cigarettes, plastic goods, petrol cars, weapons. But these industries employ huge numbers of people and therefore there are vested interests in retaining those jobs even if the overall impact of the work they do is detrimental to our future.

To tackle the climate crisis we need to pivot to a low growth economy. Reducing steel manufacture, fossil fuel-based industries, petrol/diesel cars, car ownership, air travel, food packaging, food wastage … this will all mean a loss of jobs. But that shouldn’t stop it happening. Incidentally this is also why the idea of a red-green new deal needs exploring seriously. The UK Labour Party and its Trade Union partners need to navigate the opportunity to rethink job security in the light of a low growth green economy.

Putting all this together … universal basic income is beginning to sound like a smart way of mitigating the effects of adjusting to a low growth economy, of softening the loss of work to automation, and of enabling part-time work. This would also have the benefit of valuing unpaid work and enabling lifelong learning. I’ve been reading about the history of UBI and it’s a case study of an idea that has been in and out of fashion, on both the left and the right. Its time has come.

To come back to the emerging technologies question, Matthew Taylor pointed out that along with technologies being hard to predict, even more so are the human behaviours and cultural factors in the use of technologies. On top of that we have the ways in which the developers and suppliers of technologies have to find business cases to underpin their endeavours. Much of the consumer tech breakthroughs of the last twenty years have been catalysed through the disruption and invention of business cases.

We shouldn’t pursue every new technology just because we can. It has to be useful and ethical. The climate emergency should make us prioritise those developments that will help us tackle our biggest crisis. Technology should not be driven by what consumers want but by what humans need. That’s why we need social scientists and humanists deeply engaged with emerging technologies: and we need diverse and critical voices to shape our global priorities.

I found the event really thought-provoking and I’m very grateful to Matthew Griffin and the organising team for the invite. There is a world of thinking out there about the future of work, tech and learning. I think I’ll start with the RSA Future of Work, put on my science fiction far-future goggles for the emerging technologies codex and I’ll keep a special eye out for gender analysis in these spaces.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

This may seem like a strange topic in the context of our digital lives, but just think of all the ways our tools help us format-shift for convenience:

  • listening to a book rather than reading it because we’re driving to work, but will switch back to reading at bedtime
  • watching Facebook videos with subtitles on to avoid disturbing people
  • making voice-to-text memos because it’s easier on fingers/thumbs than typing, and we can do it while walking
  • recording voice messages on WhatsApp because it conveys emotion/mood better and might be faster

Whether we are just consuming content or preparing it for others to consume, I love that the sender can encode a message in one format and the receiver can decode it in a format of their choice.

Accessibility is a hot topic right now, and it really has come of age. It’s so useful for people to be able to format-shift, for reasons of sight, hearing, fine motor capability, cognitive processing and behavioural preference.

Years ago I recruited Jonathan, a skilled content editor with impaired hearing and a wry sense of humour. He had a stenographer come to our organisational briefing meetings and I loved watching the slight delay between the Chief Exec making a “joke”, the words of the joke appearing on Jonathan’s laptop screen and his sarcastic hmphhh. These days we could switch on the google transcribe app on my phone and the attempt at a joke would be machine-translated. Hmphhh.

I am surprised that Microsoft hasn’t realised the flaw with pushing Cortana voice-activation in the workplace. So many of us work in open-plan offices: do we really want colleagues to overhear us scrambling about to find the document we lost, or to know we’re fumbling with specialist software we clearly haven’t used for ages? I’ll type, thanks.

There’s also something going on here about multi-tasking. I like to listen to Medium articles through a text-to-speech reader while I wander about the house sorting out washing. I completely understand why someone would want to re-listen to a lecture recording while cooking. Yes, I know that the evidence says we’re not as good at multi-tasking as we think.

Which leads me also to captions/subtitles. Apparently the use of subtitles is rising steeply, and not just amongst the hard-of-hearing. As well as the need to sometimes watch videos without sound, another scenario is that the visuals/audio alone aren’t enough to hold our attention, but subtitles as well might just keep us looking at the screen. We can use subtitles as an attention management hook. I know I do: sometimes it’s all that keeps me from playing klondike solitaire while I’m watching a film.

Three cheers for format shifting: what’s not to like?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

Image: Bananagram tiles spelling out the blog post themes

Digital Lives: Formats, Privacy and Presence

I’ve been chewing over a few themes for the past six months or so and it seems time to try to blog them. They all feel connected somehow. There is much more to say to apply this to education and the workplace but I thought I’d start by laying these themes out …




I’m interested to hear what you make of these posts. Comments very welcome.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

As I learnt in my Communication Studies A Level, the person sending a communication encodes it into a medium, and it is then decoded out of the medium by the receiver. The choice of medium for a two-way communication might necessitate a time delay between sender and receiver, such as a carrier pigeon or a paper letter in the post. Or it might be instantaneous, such as semaphore or telephone. The terms we use to describe live and not-live are synchronous and asynchronous. Of course, in technical terms there might be a slight delay even in a live communication, such as long-distance telephone calls or a live translator, but I will count those as synchronous.

Social media platforms combined with near-ubiquitous connectivity are particularly increasing the synchronous options. More platforms now support both synchronous and asynchronous communication, but they also support a blurry space in between, known as near-synchronous. Some platforms show you when someone is online to read your message, and even whether they’ve read it. They might even show you when they are typing a reply. If you’ve ever had a tense conversation on WhatsApp you’ll know the frustration of watching the “…” disappear as someone decides to delete what they had been typing.

The negotiation of norms and expectations on these sorts of platforms is rarely explicit: people adopt them in clusters, the clusters grow, and multiple cultures develop. The etiquette of exiting a Facebook Messenger group about a get-together I’m not going to is awkward every time. Perhaps others have better “socmed” skills than me.

So, given the implicit and evolving rules of social media, I think the near-synchronous scenario is a particularly challenging one in which to establish norms.

Personally, I find myself using the phone less and less, and preferring asynchronous platforms because:

  • it gives me permission to think before replying
  • it gives me permission to be off-grid, off-line, unavailable without prejudice
  • emails can be saved as a more discrete time-stamped artefact

In a work context, my colleagues and I are using Teams more and more. However, even amongst our team of 15-ish there are “residents” and “visitors”, so I can never assume that someone has seen a message.

See more about the Digital Visitors and Residents model (a great replacement for the Digital Natives and Immigrants concept).

When would you send an email, when would you send a message on Teams or on Skype, or, if they are nearby, when would you walk over to them? What determines the boundaries of the group and the scope of the collective norms? Traditionalists would tell me that the organisational structure will determine boundaries: but that doesn’t work if the role of your function is to collaborate. There is something organic about collective adoption of tools. I’d love for someone to point me to a conceptual model describing the different factors affecting adoption of something like Teams. I suspect it’s something like:

  • does it get traction with a critical mass or does adoption have to be universal?
  • does it require notifications and follows to be set up? That can put people off and gives reluctant participants an excuse not to keep up
  • does it match existing organisational units or is its value precisely that it cuts across the traditional structure?
  • does the Nielsen 90/9/1 rule of participatory media apply, and are there enough participants in the 1% for it to be a discussion rather than a monologue?

There’s much more to say about the potential role of Teams in an educational context; I hope to come back to that in a separate post.


There’s a common complaint that being glued to your phone on the train is “anti-social”, but who’s to say it isn’t a poorly grandad the commuter is messaging? We split our attention between physical presence and digital presence. What does it mean when someone is present in the room, present in a social media text chat, and maybe even also listening to music in their headphones? They are multi-tasking and multi-present. How many channels can we cope with in one go, particularly social channels? And if it’s a choice between a slightly distracted social connection or no connection at all, which is best? In particular, we parents are berated for not giving our kids 100% attention. But when did that ever happen? In what period of history were children given all their parents’ attention? Maybe that mum sat at the park on her phone is helping a friend through a crisis?

So … what matters most: presence or participation? And do we sometimes set the bar for online participation higher than for physical participation? And is multi-presence a good thing?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.