
Today is the 10th Ada Lovelace Day: an international celebration of women in Science, Technology, Engineering and Maths.

I invited the women of Warwick University IT Services to lunch. We made some new connections and discussed ideas for our workplace. We chatted, we ate, we cupcaked. A lovely way to spend a lunchtime 😀

A huge thanks to ITS and the Equality Diversity and Inclusion team for funding our event.

I had the honour of speaking on a panel at the World Futures Forum on Tuesday 24th September. The opening keynote by Futurist Matthew Griffin introduced a mind-boggling number of emerging technologies between now and 2080; see the fascinating “codex”.

Opening the panel session he chaired, Griffin asked me: “Are we prepared for the future? Is education preparing our learners for the future?” and I said something like …

No! When have we ever been prepared for the future? I’m not sure it’s the main purpose of education to produce the future workforce. I think there’s a set of issues around what we learn and how we learn. We don’t know exactly what we need to learn but I don’t think we should throw away the way we teach existing disciplines. We still need deep specialists in STEM. But we need them to collaborate in the workplace with other deep specialists: that’s where a lot of innovation comes from. We need “soft” skills of the human touch, of empathy, of ethical thinking: human skills. It’s not just about STEM and human skills though. Many of these emerging technologies feel like sci-fi. I read a lot of sci-fi. Often sci-fi is dystopian. We need historians and sociologists and philosophers too, to avoid these technologies leading us to bad futures.

It was probably more garbled than that, but that’s the gist.

Human skills were a recurrent theme of the day: adaptability, collaboration, empathy, problem solving, communication etc. There were some really good inputs about how to describe, develop and promote those skills. There was a strong sense of needing to actively develop and evidence these skills, described well by Tom Ravenscroft. There were calls from Laura Overton to redesign the way we support learning in the workplace.

Lord Jim Knight focussed on his considerable expertise around schools and made an interesting observation that “in employability conversations employers often say urgent and radical change is needed. Until it’s their own children they’re thinking about”. He called for education to do as much for wellbeing as for skills, and he railed against the over-testing in primary schools. Amen.

I feel strangely unpanicked about the idea that my children will have to retrain several times for the workforce of the future. Perhaps that’s because I never trained for a “career”. I did philosophy and literature and then followed my nose, finding my way into technology in education. The only job title a careers teacher would recognise was “bookseller” and that was early on my path. I’ve had about eight employers in my 20 years of full time work. Following my nose has served me well so it doesn’t scare me that my kids might have to do the same.

The words “work”, “jobs” and “careers” were used somewhat interchangeably today and I am realising that masks something. I have friends who are experts in “careers” and they would be the first to say that work ≠ job ≠ career. What does that unmask? Not all work is paid. Not all jobs are careers or jobs for life. Not all work pays fairly. Also, importantly, not all work is good.

Taking each point in turn …

Not all work is paid

Economists would tell you that unpaid work is a significant factor in any economy. Invisible Women by Caroline Criado-Perez describes the way that work gets done in societies. Work like cooking, cleaning, childcare and caring for the sick and elderly is often unpaid, and it is overwhelmingly done by women.

Actually there is a historic pattern that when unpaid work becomes paid work, more men start doing it. So the idea that work you used to do is now being done by someone else is not new. It’s just that usually it doesn’t happen to men. And this time it’s automation “stealing” the “jobs”.

On a different angle, Matthew Taylor from the RSA made a very salient point that the automation narrative is politically dangerous. Sociologists have found in surveys that around 40% of people feel the system of our current society should be smashed: there are people who want chaos. He suggested we should not feed that fire by threatening loss of work to automation.

Not all jobs are careers or jobs for life

Criado-Perez documents that the majority of the part time workforce is female. Juggling multiple work roles, both paid and unpaid, is common in many cultures.

When people bemoan that our children cannot expect a job for life, I reflect that I never expected a job for life. The sectors of our economy where people had jobs for life may be a mixture of “professions” such as accountants, lawyers and engineers, and unionised skilled labour such as manufacturing, steel, construction etc. I have a strong suspicion that the data would show that for the decades these were secure jobs for life they were largely male.

Not all work pays fairly

It doesn’t take long to recognise that some of the jobs that are most materially important to society are the lowest paid. Where would we be without people to empty bins, pick crops, care for the elderly, look after our kids? The importance of this work is not reflected in its pay. Even when work is paid, it is paid according to what the worker will accept and what the employer will offer. Is it a coincidence that these lowest-paid jobs are more likely to be done by immigrants? And yet some of these lowest-paid jobs are the most human, and the least likely to be automated.

Not all work is good

Companies that make stuff and sell stuff can make profit and therefore can afford to create jobs and pay people. As long as there are people to buy the stuff, there can be work to make the stuff. And yet we know that some of this stuff is bad for people, health and the planet. Junk food, cigarettes, plastic goods, petrol cars, weapons. But these industries employ huge numbers of people and therefore there are vested interests in retaining those jobs even if the overall impact of the work they do is detrimental to our future.

To tackle the climate crisis we need to pivot to a low growth economy. Reducing steel manufacture, fossil fuel-based industries, petrol/diesel cars, car ownership, air travel, food packaging, food wastage … this will all mean a loss of jobs. But that shouldn’t stop it happening. Incidentally this is also why the idea of a red-green new deal needs exploring seriously. The UK Labour Party and its Trade Union partners need to navigate the opportunity to rethink job security in the light of a low growth green economy.

Putting all this together … universal basic income is beginning to sound like a smart way of mitigating the effects of adjusting to a low growth economy, of cushioning the loss of work to automation, and of enabling part-time work. This would also have the benefit of valuing unpaid work and enabling lifelong learning. I’ve been reading about the history of UBI and it’s a case study of an idea that has been in and out of fashion, on both the left and the right. Its time has come.

To come back to the emerging technologies question, Matthew Taylor pointed out that along with technologies being hard to predict, even more so are the human behaviours and cultural factors in the use of technologies. On top of that we have the ways in which the developers and suppliers of technologies have to find business cases to underpin their endeavours. Many of the consumer tech breakthroughs of the last twenty years have been catalysed through the disruption and invention of business cases.

We shouldn’t pursue every new technology just because we can. It has to be useful and ethical. The climate emergency should make us prioritise those developments that will help us tackle our biggest crisis. Technology should not be driven by what consumers want but by what humans need. That’s why we need social scientists and humanists deeply engaged with emerging technologies: and we need diverse and critical voices to shape our global priorities.

I found the event really thought-provoking and I’m very grateful to Matthew Griffin and the organising team for the invite. There is a world of thinking out there about the future of work, tech and learning. I think I’ll start with the RSA Future of Work, put on my science fiction far-future goggles for the emerging technologies codex and I’ll keep a special eye out for gender analysis in these spaces.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

This may seem like a strange topic in the context of our digital lives, but just think of all the ways our tools help us format-shift for convenience:

  • listening to a book rather than reading it because we’re driving to work, but will switch back to reading at bedtime
  • watching Facebook videos with subtitles on to avoid disturbing people
  • making voice-to-text memos because it’s easier on fingers/thumbs than typing, and we can do it while walking
  • recording voice messages on WhatsApp because it conveys emotion/mood better and might be faster

Whether we are just consuming content or preparing it for others to consume, I love that the sender can encode a message in one format and the receiver can decode it in a format of their choice.

Accessibility is a hot topic right now, and it really has come of age. It’s so useful for people to be able to format shift, for reasons of sight, hearing, fine motor capability, cognitive processing and behavioural preferences.

Years ago I recruited Jonathan, a skilled content editor with impaired hearing and a wry sense of humour. He had a stenographer come to our organisational briefing meetings and I loved watching the slight delay between the Chief Exec making a “joke”, the words of the joke appearing on Jonathan’s laptop screen and his sarcastic hmphhh. These days we could switch on the Google transcribe app on my phone and the attempt at a joke would be machine-transcribed. Hmphhh.

I am surprised that Microsoft hasn’t realised the flaw with pushing Cortana voice-activation in the workplace. So many of us work in open plan offices: do we really want colleagues to overhear us scrambling about to find the document we lost, or to discover we haven’t opened that specialist software for ages? I’ll type, thanks.

There’s also something going on here about multi-tasking. I like to listen to Medium articles through a text-to-speech reader while I wander about the house sorting out washing. I completely understand why someone would want to re-listen to a lecture recording while cooking. Yes, I know that the evidence says we’re not as good at multi-tasking as we think.

Which leads me also to captions/subtitles. Apparently the use of subtitles is rising steeply, and not just amongst the hard-of-hearing. As well as the need to sometimes watch videos without sound, another scenario is that visual/audio alone isn’t enough to hold our attention but subtitles as well might just keep us looking at the screen. We can use subtitles as an attention management hook. I know I do: sometimes it’s all that keeps me from playing klondike solitaire while I’m watching a film.

Three cheers for format shifting: what’s not to like?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

Image: Bananagrams tiles spelling out the blog post themes

Digital Lives: Formats, Privacy and Presence

I’ve been chewing over a few themes for the past six months or so and it seems time to try to blog them. They all feel connected somehow. There is much more to say to apply this to education and the workplace but I thought I’d start by laying these themes out …

Formats

Privacy

Presence

I’m interested to hear what you make of these posts. Comments very welcome.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

As I learnt in my Communication Studies A Level, the person sending a communication encodes it into a medium, and it is then decoded out of the medium by the receiver. The choice of medium for a two-way communication might necessitate a time delay between sender and receiver, such as a carrier pigeon or a paper letter in the post. Or it might be instantaneous, such as semaphore or telephone. The terms we use to describe live and not live are synchronous and asynchronous. Of course in technical terms there might be a slight delay even in a live communication, such as long distance telephone calls or a live translator, but I will count them as synchronous.

Social media platforms combined with near-ubiquitous connectivity are particularly increasing the synchronous options. More platforms now support both synchronous and asynchronous, but they also support a blurry space in between, known as near-synchronous. Some platforms show you when someone is online to read your message, and even whether they’ve read your message. They might even show you when they are typing a reply. If you’ve ever had a tense conversation on WhatsApp you’ll know the frustration of watching the “…” disappear as someone decides to delete what they had been typing.
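
As an aside for the more technically minded, here’s how I picture that spectrum. This is purely my own toy sketch in Python — the message states and the 60-second threshold are made up for illustration, not how any real platform actually works:

```python
from dataclasses import dataclass
from enum import Enum, auto


class MessageState(Enum):
    """Signals a platform might surface about a message you have sent."""
    SENT = auto()           # left your device
    DELIVERED = auto()      # reached the recipient's device
    READ = auto()           # the recipient has opened it
    REPLY_TYPING = auto()   # the dreaded "..." indicator
    REPLIED = auto()


@dataclass
class Exchange:
    state: MessageState
    seconds_since_sent: float

    def feels_like(self) -> str:
        """Rough label for where an exchange sits on the
        synchronous / near-synchronous / asynchronous spectrum."""
        engaged = self.state in (MessageState.REPLY_TYPING, MessageState.REPLIED)
        if engaged and self.seconds_since_sent < 60:
            return "synchronous"
        if self.state in (MessageState.READ, MessageState.REPLY_TYPING):
            return "near-synchronous"   # seen (or being answered), but no reply yet
        return "asynchronous"


print(Exchange(MessageState.REPLY_TYPING, 20).feels_like())     # synchronous
print(Exchange(MessageState.READ, 300).feels_like())            # near-synchronous
print(Exchange(MessageState.DELIVERED, 86_400).feels_like())    # asynchronous
```

The socially awkward territory is that middle state: the message has been seen, and the reply may or may not be coming.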

The negotiation of norms and expectations of these sorts of platforms is rarely explicit: people adopt them in clusters, the clusters grow, and multiple cultures develop. The etiquette of exiting a Facebook messenger group about a get-together I’m not going to is awkward every time. Perhaps others have better “socmed” skills than me.

So given the implicit and evolving rules of social media, I think the near-synchronous scenario is the one where norms are particularly challenging to establish.

Personally, I find myself using the phone less and less, and preferring asynchronous platforms because:

  • it gives me permission to think before replying
  • it gives me permission to be off-grid, off-line, unavailable without prejudice
  • emails can be saved as a more discrete time-stamped artefact

In a work context, my colleagues and I are using Teams more and more. However even amongst our team of 15ish there are “residents” and “visitors” so I can never assume that someone has seen a message.

See more about the Digital Visitors and Residents model (a great replacement for the Digital Natives and Immigrants concept).

When would you send an email, when would you send a message on Teams or Skype, and when would you just walk over to someone nearby? What determines the boundaries of the group and the scope of the collective norms? Traditionalists would tell me that the organisational structure will determine boundaries: but that doesn’t work if the role of your function is to collaborate. There is something organic about collective adoption of tools. I’d love for someone to point me to a conceptual model describing the different factors affecting adoption of something like Teams. I suspect it’s something like:

  • does it get traction with a critical mass or does adoption have to be universal?
  • does it require notifications and follows to be set up? Because that can put people off and give reluctant participants an excuse not to keep up
  • does it match existing organisational units or is its value precisely that it cuts across the traditional structure?
  • does the Nielsen 90/9/1 rule of participatory media apply, and are there enough participants in the 1% for it to be a discussion rather than a monologue? (there’s a rough sketch of what that implies just below)
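
On that last point, here is a quick back-of-the-envelope sketch — my own illustrative group sizes, assuming Nielsen’s 90/9/1 split (roughly 90% lurk, 9% contribute occasionally, 1% contribute heavily) holds exactly, which of course it won’t:

```python
def expected_participation(group_size: int) -> dict[str, float]:
    """Apply Nielsen's 90/9/1 participation-inequality heuristic:
    roughly 90% lurk, 9% contribute occasionally, 1% contribute heavily."""
    return {
        "lurkers": group_size * 0.90,
        "occasional": group_size * 0.09,
        "heavy": group_size * 0.01,
    }


for size in (15, 150, 5000):  # a team, a department, an institution
    p = expected_participation(size)
    print(f"group of {size:>5}: ~{p['heavy']:.1f} heavy contributors, "
          f"~{p['occasional']:.0f} occasional, ~{p['lurkers']:.0f} lurkers")
```

On those assumptions a team of 15 has well under one “heavy contributor” to rely on, which is why adoption at that scale depends on a handful of residents rather than on the rule of thumb playing out.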

There’s much more to say about the potential role of Teams in an educational context; I hope to come back to that in a separate post.

Hmmmm.

There’s a common complaint that being glued to your phone on the train is “anti-social”, but who’s to say that’s not his poorly grandad he’s messaging? We split our attention between physical presence and digital presence. What does it mean when someone is both present in the room and present in a social media text chat, and maybe even also listening to music in their headphones? They are multi-tasking and multi-present. How many channels can we cope with in one go, particularly social channels? And if it’s a choice between a slightly distracted social connection or no connection at all, what is best? In particular, we parents are berated for not giving our kids 100% attention. But when did that ever happen? In what period of history were children given all of their parents’ attention? Maybe that mum sat at the park on her phone is helping a friend through a crisis?

So … what matters most: presence or participation? And do we sometimes set the bar for online participation higher than for physical participation? And is multi-presence a good thing?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

A recurring theme in conversations about “the digital age” is privacy. Facebook, Google’s “don’t be evil”, the Cambridge Analytica scandal … and then the backlashes and attempts to regulate: cookie law, GDPR, people leaving Facebook. It’s a very real question of what price do we pay for the convenience of social media platforms and googley/appley joined-up services.

It is fashionable to say that this is not a price worth paying. But that position also comes from a place of privilege: when people say “I’d rather pay with my cash than my data”, “I’d rather keep in touch with my family outside Facebook”, “I prefer to socialise face to face”. Great: those are choices that some people can make. Not everyone has the money or social power to make that choice.

I feel sad at a scenario where we give up wanting a universal everyone-is-welcome, free at the point of use social network. I don’t want to have to choose a platform.

If you’ve followed the Cambridge Analytica story you’ll know that your Facebook friends make decisions about your privacy, without your knowledge or consent. It’s not just about your personal decisions. Apparently one of the big threats to the security of at-risk children is their own grandparents being unable to resist posting photographs and giving away location information.

Is a conversation on WhatsApp private compared to a conversation on Facebook? Socially it might be. A 1:1 exchange is easy to know the boundaries of. But if the other person adds someone, can that person see the previous exchange?

I have joined political groups and support groups on Facebook: do I really know who can see that? If I like something on an endometriosis support group, do I mind that other people can see it?

In the workplace I regularly confuse myself with sharing documents and online folders: who can see what, and why? I want to be collaborative in my document authoring but how do I make sure commenters know who will see their comments now and in the future?

If I designed my household for my family’s use and then gave the key to another 10 people to come and let themselves in whenever they liked, what would that do to the sense of home?

These are boundaries we negotiate in our digital lives. If I’m honest I find it a bit stressful, and I’m a “digital resident”. I can imagine it is very uncomfortable for people who feel like visitors to digital spaces.

But but but … privacy is a historical concept. It hasn’t always been an expectation. In pre-industrial England people lived in smaller groups and unless they left their village they carried their history with them. Read Thomas Hardy, Jane Austen, Charles Dickens for evidence that no-one really runs away from their past: through the six degrees of separation someone will spill the beans to our heroine about the handsome stranger.

Should we expect privacy to persist as a value for another century, or is it a nice-to-have that could be traded off for social connection and convenience? Or am I only thinking that from a position of convenience as someone who has already married and established a career?

This is one of a series of posts on Digital Lives: Three Themes … Formats, Privacy, and Presence.

 

There is a collective open blogging project in the learning technology professional network in the UK. I would like to contribute my thoughts on change and transformation. I’ve written this quickly, without links and proofreading and I may revise it.

Read more about #openblog19

Change and Transformation

We are at a point on the UK HE sector maturity curve where there is a groundswell of predictions that we’re about to be disrupted. FE probably went through this pain a decade ago, so this is a more HE-specific post. I work in a Russell Group University so this probably reflects that too. I can’t claim to speak in universals. But for what it’s worth, here are my reflections on what change and transformation feel like.

I am in a position of power in my institution, at the intersection of IT and academic development. I have the authority to initiate workstreams and projects. I participate in professional networks and try to keep myself briefed on new approaches.

I could write you a lovely vision piece on what an HE education should look like in 2030 and the role that technology could play.

But that’s not my job. My job is to understand where we are now (A) and to derive from institutional priorities where we are aiming to be (B). The hardest thing is not describing the destination but working out how to get from A to B.

How to get there requires delicate footwork between the solutions supply side and the requirements demand side. IT has to be robust, so we should value caution, and we should consider sustainability and scale. Academic practice development is personal and needs to be nurtured; that needs space and time and fresh air.

Sometimes innovation is not transferred or scaled because we have grown orchids in greenhouses. The orchid can’t be replanted. I accept the need for orchids but we can’t grow fields of orchids. Boutique practices, for highly motivated and skilled staff, with small cohorts of students who are highly engaged in their learning: go for it. For many academics that is not possible. They have big groups, reluctant learners, and little time for developing their own skills. They are the early and late majority, and they need our help more than the orchid growers do.

I have to decide what my team’s priorities should be, and how we should respond to emerging practices:

  • Watching brief
  • Stay neutral
  • Intervene if we think it’s an unhelpful approach
  • Endorse and amplify if we think it’s a good approach
  • Encourage cost/benefit analysis
  • Try not to be defensive about the limitations of our own tools
  • Provide hands-on support
  • Support replication of practices by describing and promoting
  • Scope technical provision to better support practices
  • ….

These are just some of the tactical decisions we make every week. Some academics might think we’re slow to respond, or slow to provide the technology they want, but we’re trying to weigh all this up.

This all sounds quite defensive and defeatist.

But actually we have made huge progress this way at my institution. We’ve gone from no central VLE to a trusted shared platform in under seven years. That might sound like a long time but it’s been organic, recognising each department has its own trajectory. We’ve gone from DIY, high-overhead, policy-less lecture recording to a central service in the same timeframe. It’s slower for being opt-in but I think it’s better that way. We have students onside and I think my team is seen as helpers rather than police.

One of the lessons I’ve learnt is that supplying projects/strategies/solutions ahead of need is a frustrating and pointless task. Here are a range of lessons learnt, with weird analogies thrown in for free:

  • There’s the Dead Bird problem. Our open educational resource repository withered because there wasn’t enough demand. It’s not enough to want to supply something; there has to be demand.
  • The shiny output problem: some projects can have a very strong pull from a senior champion who wants something to happen and is hoping that if we build it, the rest of the university will come. They don’t come.
  • The sustainability problem: I have been a metaphorical midwife to babies I had believed would be raised by others, then have been left holding the baby.

In other words, change is hard. It isn’t just thinking up future states but getting there. Often on choppy seas, with makeshift boats.

And more: often we need a flotilla of boats, a loose coalition of learning technologists, staff developers, systems managers, academic skills advisors, administrators and quality managers. “Head east!” We have different styles of boats and different types of crews, and different reasons for our voyages. Travelling together slows some of us down, but also creates a tailwind for others. I did warn you about dodgy metaphors.

Transformation

It sounds so shiny and hard and clean. It sounds metal and futuristic. But real transformation has an organic feel: real roots and dirty soil, the sweat from hard work. It’s messy.

Digital transformation isn’t an end point, it’s a process. It is challenging conversations and difficult decisions and change in parts of the organisation that have been ignored. It’s listening to the naysayer and learning what hasn’t worked before and why. It isn’t assuming that something is missing because no one thought of it: it is finding out why something is missing.

I’m working on accessibility strategy at the moment and it’s messy: a long to-do list and no dedicated resource. Yet there are lots of us trying to make progress. It’s fertile ground that needs watering with attention. When we come out of the map-making and plan-writing fog it will seem obvious, looking back, why we did what we did.

I have to have faith in that: I’ve entered into the messiness so many times in my career, and I’ve come out the other side with progress.

In summary …

Change and transformation are hard, and messy, and organic. Sometimes it’s only looking back that you can appreciate what you achieved.

I live a privileged life and have the luxury of laughing at this Punch cartoon and thinking about the gender pay gap and representation of women in tech.

On this International Women’s Day I would like to mark the more global challenges faced by women.

  • Domestic violence
  • Reproductive health
  • Freedom of movement
  • Poverty
  • War

There aren’t any funny cartoons about rape in warzones, or women unable to control their fertility, or women killed by men.

We can’t be complacent: there is a nasty thread of misogyny in UK society too.

Last year’s Handmaid’s Tale was praised for being “topical” and “timely”. That’s such a depressing thought. One hundred and one years since the Suffragettes’ success, it sometimes feels like we could enter a new dark ages, with our progress stripped away.

So I don’t think the gender wars are won: I think we need to remain vigilant.

And that’s why I am writing this post to celebrate women, and the women in my life.

My mum, my sister, my many amazing relatives around the world.

My colleagues, my support network, my friends and neighbours.

For every woman who has been brave, and truthful, and fought to be heard:

Happy International Women’s Day 2019.

It’s been an interesting week for me. At work there are some tasks that require me to choose how to approach them. I’ve been aware of needing to question my default approaches, and perhaps to be braver sometimes.

And at the national level: Brexit Brexit Brexit.

I went to a Constituency Labour Party meeting on Thursday and it was instructive. A member had proposed a motion to give a clear backing to our MP (Labour, Remain) to endorse a People’s Vote / Second Referendum.

If you’re expecting me to describe a chaotic shouting match of binary positions and dogmatism you’ll be disappointed. The discussion was courteous and nuanced, speakers took a range of positions and I learnt a lot from everyone’s articulate statements.

This post reflects the topics we covered: https://labourheartlands.com/labours-composite-motion-was-always-an-option-for-a-second-referendum-not-a-policy/ .

I had not been to many of these meetings so I naively got stuck in and suggested an amendment to the motion, from a referendum being “the best option” to it being “an option”. The discussions happened, and it was only at the end that I realised my mistake.

I had urged for an amendment which I thought could get maximum backing. Maximum consensus. But what we were really discussing was the risks of advocating a position that a referendum is the best next step now. The original motion provided a better scaffold for a meaningful discussion. I hadn’t seen that.

Sometimes it’s better to find the points of disagreement: about evidence, about tactics, about likelihood of particular trajectories.

There has been a friendly and constructive chat on the Facebook group and I am heartened by the quality of discussion.

When is it best to aim for consensus, when is it best to aim for clarity?

How do you know what progress would look like until you air the views in the room? Understand the variables first, then describe them, then build meeting structures appropriate to the situation, then operate in those structures and reach conclusions. Rinse and repeat.

Meanwhile at work I have had a mental block on how to approach some issues. We need to have an informed discussion about our communications infrastructure for students (platforms, channels, controls). There is another area where we need to map out a new way of managing curriculum design and delivery based on data structures rather than documents.

What role should I play in this?

These are all approaches I have used in the past:

  • Describe the variables that constrain the future state, including feasibility, desirability, sustainability etc. Create a framework for making decisions within those constraints.
  • Describe a future state in a compelling enough way that it would be hard to disagree with. Push disagreement to the margins, in the interest of a forced consensus. Best used when there’s a time pressure.
  • Describe a future state with enough clarity that people can disagree with me and refine/counter it. That’s a thesis, antithesis, synthesis approach. Start with a straw man.
  • Avoid the big picture discussion and tackle the future piecemeal (some people advocate this position because of how hard the other options are)
  • What have I missed?

The structure for discussion needs careful consideration. The method of decision making needs choosing consciously. Richard Jones’s “never ask users what they want” drew on business analysis techniques to suggest that “Brexit was built on a poor requirements analysis”. Absolutely. Method is important.

My take home from the Labour discussion was that my instinct is often towards consensus but consensus isn’t always the most desirable outcome. I need to fight my instincts sometimes to allow for the uncomfortable discussions.

It’s interesting to reflect on when technical advances become the new normal in my own life.

There have been quite a few of these moments for me this year. So here’s a highly subjective run down of the new normal …

Phone payments

I’m a bit late to the party, but I have finally discovered the pleasure of buzzing my phone on a contactless payment device. No need to get my wallet out, and there’s an instant digital receipt. The haptic buzz feels like a key part of the experience somehow, which is notable in its own right.

Voice-controlled assistants

Alexa, Siri, “hey Google” … These have been around for a few years but it is now totally mainstream to see devices designed for the home. I’ve not got a custom device yet though I’ll probably give in. However our new SkyQ remote control has a voice input button and I’ve started to rely on that, commanding “Madam Secretary!”. Either way, I can see that the rise of voice is shaping our interface with technology. Jisc ran a challenge to imagine a screen-free digital learning environment; that will be interesting to follow!

Drones

Last Xmas we bought my eldest a drone. In November I won a drone. Doctor Who had an episode about delivery drones. And just before Xmas Gatwick was frozen because of a rogue drone. I’m just trying not to think of drone warfare because that’s terrifying.

Facial recognition

Facebook and Google have got pretty good at this. My mum started to trust Google photos earlier than me, and it even seems to manage the transition from baby to boy. I also have facial recognition as an authentication option on my phone now.

Fingerprints

I have a terrible memory for numbers. My phone has a fingerprint sensor and finally I can access my online banking! My son pays for his secondary school food with a thumbprint. When I travelled to the US there was a self-service security step with four fingerprints alongside photo and passport scan. For me, 2018 is the year this became normal.

So … should we be afraid?

So many of these technical milestones are surveillance technology. It’s the stuff of dark sci-fi. Privacy is dead. This short article by Tobias Stone, “Your Privacy is Over”, makes the point. My phone is a Chinese-made Huawei. I should be worried.

Perhaps I’m complacent, from my privileged viewpoint, but in the long view of history I wonder if privacy is a blip. In many pre-industrial human societies we lived in smaller groups with less potential for anonymity and clean slates. Our histories caught up with us. I think of George Eliot and Jane Austen when the truth about the handsome stranger eventually loops round via six degrees of separation. I don’t trust governments, or corporations, so I should be worried, but I don’t feel it, yet.

If I am totally honest with myself I am also numb to other fears. Climate change should terrify me, but I think I have inoculated myself from the panic I should be feeling. I am not in denial: I am just not fearful in an emotional sense. So perhaps I feel the same numbness to privacy concerns.

Honourable Mentions

I had a smartwatch a few years ago but have noticed that they have become much more common this year, and Fitbit-type bands are everywhere now.

The Segway Ninebot I tried in August: only knee-high, it felt intuitive to control and I had my hands free. Not normal for me yet but I am excited to see where that technology goes.

Ditto I am excited about the future of electric and autonomous vehicles. I love that there was a Tesla sent into space. Not part of my life yet though so it doesn’t make this list.

For me, this has been the year for phone payments, voice control, drones, facial recognition and fingerprints becoming part of my everyday life.

What technologies became normal to you in 2018? And how do you feel about it?