Lecture Capture at Warwick

On 11th September 2017 I went to the University of Leicester's event "Implementing Lecture Capture: what are we learning?". Lots of useful discussion, and I presented one of three case studies in addition to Leicester's own story, which is told well through their videos (see event page). Thanks to my colleague Jon Owen, Service Owner for the Lecture Capture Service, for his input into the presentation below.

Here's my talk on Warwick's lecture capture journey:

[Slides: Warwick lecture capture presentation, parts 1 and 2]

So what have we learnt?

Lecture capture is an educational technology driven by student demand

The Warwick echo360 pilot started as a replacement for Camtasia Relay. That was a tutor-managed approach where the tutor had to finish the session early to process the recording in time, and there was no standard way to provide recordings to students. The A/V Service Owner knew better options were becoming available and started the pilot in 2012 with the equally forward-thinking Chemistry Department (what is it about chemists that makes them technology early adopters?!)

Lecture capture quickly became a hot topic on campus and has appeared in every Students' Union education officer candidate's manifesto since 2012.

The service got students' attention, and we're making it happen: a definite good-news story for responding to student demand.

I forgot to mention this on the day, but Sarah Williamson from Loughborough reminded us that the withdrawal of the Disabled Students' Allowance meant that HEFCE/BIS put the onus on universities to replace paid-for notetakers with institutional lecture capture systems.

Lecture capture shines a spotlight on different approaches to teaching

I’m not just talking about the frequent debates about chalkboards!

Talking with academics about their use of, and concerns around, lecture capture highlights:

  • the balance of their teaching between large lectures, smaller lectures, seminars, group work etc
  • the extent to which they teach as part of teams or quite autonomously
  • the implicit content delivery models, relationship to textbooks, coupling between teaching delivery and curriculum, how often content changes, whether content contains commercially sensitive materials or possibly high value research material
  • attitudes to attendance – how much does it matter, is it monitored, do students have choices?
  • approach to discussions – do they happen in lectures? how do staff and students feel about being recorded? does it deter them from asking questions, is that because of learner culture or potential future use of recordings?
  • position on the use of screens in sessions: do we want students to be looking at screens as well as the lecturer? Some academics are happy with focussed screen use for small group teaching but not with screen use in lecture theatres

Lecture capture is a battleground for intellectual property and academic freedom

Lecture capture highlights staff concerns about:

  • terms and conditions of copyright ownership
  • surveillance and monitoring
  • team staffing models and job security

By its nature it is a central service with centrally-imposed policies, which in some institutions automatically attracts suspicion and dissent!

The technical landscape is complex

There are multiple teams involved in lecture capture, with different concepts of "rollout" and different support models. A/V specialists are used to providing time-critical responsiveness, while VLE teams often need a few days, a week or longer to fully resolve a user's issues. We have different but complementary service cultures.

Integrating with the VLE adds value but also brings different dependencies and constraints: information structure and end-to-end workflows.

Software infrastructure: video capture, editing, management and sharing is a confusing, converging marketplace. Alongside echo360 we have Planet eStream integrated with the VLE for video management and streaming, and as we already have Turning Technologies ResponseWare we have an overlap with echo360's Active Learning Platform. I know from other institutions too that this is a tricky space to manage and predict: overlap seems inevitable, but it can look like duplicated spend.

Timetable-driven lecture capture is harder than it should be. My colleague Russell Boyatt has created some scheduling middleware between our cached timetable data and our lecture capture system, but the data itself is complex and the additional workflows required to handle a fluid timetable are challenging.
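
To make the shape of that middleware concrete, here is a minimal sketch of the two core steps: mapping timetable events to capture bookings, and diffing successive timetable snapshots to surface the "fluid timetable" cases that need extra workflow. All names and fields here are hypothetical illustrations, not Russell's actual code:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical shapes: real timetable feeds and capture APIs differ.
@dataclass
class TimetableEvent:
    event_id: str
    room: str
    start: datetime
    end: datetime
    module_code: str

@dataclass
class CaptureBooking:
    event_id: str
    room: str
    start: datetime
    duration_minutes: int
    title: str

def to_booking(event: TimetableEvent,
               lead_out: timedelta = timedelta(minutes=5)) -> CaptureBooking:
    """Map one timetable event to a capture booking, stopping a few
    minutes early so back-to-back sessions in a room don't collide."""
    duration = (event.end - event.start) - lead_out
    return CaptureBooking(
        event_id=event.event_id,
        room=event.room,
        start=event.start,
        duration_minutes=int(duration.total_seconds() // 60),
        title=f"{event.module_code} lecture",
    )

def diff_timetables(cached, fresh):
    """Find events that appeared, vanished or moved between syncs --
    the fluid-timetable cases that need human attention."""
    old = {e.event_id: e for e in cached}
    new = {e.event_id: e for e in fresh}
    added = [e for k, e in new.items() if k not in old]
    removed = [e for k, e in old.items() if k not in new]
    moved = [e for k, e in new.items()
             if k in old and (e.start, e.room) != (old[k].start, old[k].room)]
    return added, removed, moved
```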

How much do the added-value features and analytics get used? We pay for them as part of the platform, and it's great to hear when people are using them. But in my experience they're not used very much, and we're not pulling them through to any kind of learning analytics data aggregation yet. And a separate issue: editing. Do you encourage staff to top and tail recordings, or do you encourage release of raw footage and let students move the slider bars? If topping and tailing feels like a steep learning curve for staff, is it justified by the benefits to students? I think raw footage is fine.

What do we do about transcripts and captioning? How do we optimise for accessibility and inclusion in an affordable and scalable way? This is an area of fast-moving technology development, so we need to keep a watching brief. But that alone could take someone half a day a week: can we afford to do that? Or will we need to wait for lecture capture suppliers to offer approved, integrated captioning providers at a reasonable cost, on an on-demand basis, with some authorisation from someone appropriate at the university?

It is an opportunity to think ahead

What will the best technical infrastructure be in five years' time? As I said, it's a complex technical landscape with many players, so it's hard to plan far ahead.

Retention. How long should we keep recordings, for the purposes of revision and audit, and how do cloud cost models change that? There was a useful discussion of this later in the day. Basically, many institutions retain materials for the programme duration plus one year, which usually means four or five years. Many institutions started their lecture capture service in the last five years, so only a few people in the room had gone through the process of deleting recordings; some institutions don't delete at all. Because many institutions make recordings available through the VLE, lecture recording access is determined by VLE access. So the important moment is when students lose access to the VLE and therefore to recordings: that is the de facto end of access.

Lecture capture brings elearning teams into the world of capital spend and corporate comms: how do we benefit from the visibility? The University of Leicester speakers stressed how their lecture capture system is part of their Digital Campus and integrated into their overall investment plan.

Are we capturing normal lectures or trying to change lectures? Are we promoting a service, developing a practice or enforcing a policy? This was one of the recurrent themes of the day's discussions.

Closing Thoughts

My final slide was:

  • Build on the momentum to enhance the wider technology-enhanced teaching landscape
  • Amplify the student voice but explain the limitations and concerns
  • Recognise staff concerns but challenge them:
    • Attendance
    • Copyright
    • Bootlegging
  • Have an explicit policy to counter rumours and myths
  • Value the many roles that go into providing and supporting lecture capture
  • … and don't forget to switch on the mic!

A good event, thank you to Leicester for the invite.

There's a huge amount of data and information on lecture capture practices, but I wanted to highlight a few sources:

Barbara Newland’s data from Heads of eLearning Survey

Emma Kennedy’s post “Opposing lecture capture is disablist”

Matt Cornock et al’s work on student use of lecture recordings

WIHEA-funded projects at Warwick:

Economics of learning materials

The oer-discuss list caught fire recently over the history of reusable learning objects and open educational resources. If you're not familiar with those concepts, look away now, this post isn't for you!

A while back I wrote a paper with David Kernohan where we tried to give a narrative with a UK context: OER – a historical perspective. In fact I have been a bit obsessed with open content for many years, but I have been silent for a while. I'm going to jump right in here with a few themes I've been thinking about.

Use Value and Exchange Value

In the discussions about whether content has value, there is often a question about whether content can be bought and sold, whether it is "monetisable". In Marxist economics that is the type of value called exchange value: where a commodity can be exchanged for money. There is another type of value: use value. That is the extent to which a commodity is useful. It is about its utility, not its cost or price (see below). I think most teaching resources can have a high use value, both for primary use and secondary reuse, without that ever translating into an exchange value. They might be valuable, but you can't sell them.

Does that mean “content is free”?

I don't think so. Teaching materials cost time and effort to produce. One of the arguments for sharing teaching materials is that of public service: we taxpayers/citizens pay the wages of teachers and academics and have some stake in their outputs being used as widely as possible, so that others benefit from the use value. It's the same line of argument as the "public paid, public should benefit" case for open access to research outputs. The cost model does not translate into a price model: the cost model is situated in a broader context of who paid for the labour of producing the content.

Enter open licensing as a different model of value

Instead of pricing teaching materials, open licensing focuses on getting greater use out of the materials: greater utility, a greater return on investment. Openly licensed digital content is also non-rivalrous (see the pedagogy of abundance chapter by Weller), so copying it doesn't reduce its value. Open licensing turns value on its head: the value is in use, not in exchange.

The learning object economy

This was the idea of a marketplace for reusable content. Over the last decade we have seen the maturing of app markets and the ebay marketplace, enabled by micropayment models making small payments convenient for consumers and efficient for sellers. We have seen pyramid economics, meaning that enough micropayments can fund a product. The ebay for reusable learning materials never materialised, partly because this type of content doesn't have exchange value. In the meantime, the idea of an ebay marketplace gave birth to other models that connect consumers and sellers together. Perhaps there is a future for a freecycle for learning materials.

Collective commissioning

It is in seeing the education system as a system that we can really benefit from openly licensed teaching materials. Open textbook initiatives pay the content producers for their labour: they cover the costs of production so that use can be free. Collectively commissioning textbooks is the purest illustration of this: commissioning at scale. We need to look to kickstarter models of publishing and to "patron-driven acquisition" to scale up our collective commissioning. There are also models of funding the rights clearance of existing books: buying out the content in order to share it. It's a bit like someone I know who buys a bottle of sambuca from the bar so that he can shower his pals with "free" shots 😉

What next?

If I understand them correctly, tools like mozilla's popcorn maker and open tapestry allow you to remix resources without copying them. Online curation tools could be a growth area. What will they mean for creative commons licences? There's something going on there that I don't understand yet. But I like the idea of not having to orphan content from its context in order to use it. I am still not convinced that many people "repurpose" content, and I don't mind that: managing teaching materials is good and reuse of any kind is great. I have no big conclusion to this post, but hopefully it makes sense!

Bridging the worlds of OER and Open Research

I spent 12th-13th April at the CETIS conference, with a focus on OER and open practice, and 19th-20th April at the Beyond the PDF 2 conference, with a focus on open access and open research. I feel very lucky to have a foot in both worlds.

The view across the bridge was raised by both Suzanne Hardy and Nick Sheppard in the OER session at CETIS. After a conversation with Laura Czerniewicz, who regularly crosses the bridge, I decided that it might be useful to share some of my thoughts on how these two worlds relate. This post is more about supporting infrastructures than about changes to practice. It is about some areas where the problem spaces feel similar, even if they are not exactly the same issues. To the few people who cross the bridge, I hope it reflects your take. To the majority who live on one side of the river, I hope it might encourage you to borrow a little more from your neighbours where it fits.

[Image: the Øresund bridge between Sweden and Denmark (image sourced and stamped via http://www.nottingham.ac.uk/xpert/attribution)]

Slight differences

OER: Workflows: a) frictionless sharing – "OER exhaust", sharing as a byproduct of teaching and collaboration, b) open development prior to use, c) collaborative tools
Open Research: Workflows: a) data management as part of research, b) open notebooks, c) collaborative tools

OER: Learning designs as a) a common language to develop practice, b) a framework for executing services
Open Research: Experimental designs as a) a common language to develop research, b) a framework for executing services

OER: Repurposing (I suspect this is a red herring and an unattainable goal)
Open Research: Reproducibility (Carole Goble suggested this might be a red herring and an unattainable goal)

OER: student as producer, participative learning
Open Research: citizen science

OER: information about usage: paradata
Open Research: information about usage: altmetrics

And there is plenty that the two have directly in common.

  • Creative Commons Licensing. Gratis/Libre debates, the CC non-commercial clause and the role of publishers. I'd like to see both groups take note of the importance of machine-readable and embedded licences, because content in this distributed open ecosystem easily gets detached from its host page (see that chapter in Into The Wild, and the small sketch after this list). Ross Mounce pointed out to the BeyondthePDF2 conference that we should be improving embedded metadata.
  • Reward and Recognition for reaching out beyond traditional realms of academic practice, for crafting materials, for reviewing and commenting on other people’s work. Career risks taken by digital scholars.
  • An ecosystem approach: small pieces loosely joined rather than silos, interoperating pieces of the jigsaw, jorum and humbox, figshare and PLOS, giving people choices in how to assemble their services without locking them in.
  • Identifiers – Open Research world is ahead on this, with ORCID and assignment of DOIs, OER world should take note.
  • Provenance – the ability for a user to evaluate a resource: part of digital literacy, part of research skills.
  • Bundling linked outputs – Open Research world talks about metajournals, macropapers, nanopubs, OER world talks about curations. This is potentially a very fertile meeting ground – both worlds can lay claim to slidedecks, explanatory videos: both worlds aspire to the idea of the topic being at the centre of a whole range of outputs. A research output can be a teaching resource, a videoed lecture can be a research dissemination tool.
  • Blogging, tweeting, aggregating, data mining, the social graph of knowledge. We’re all talking about the public academic, what it means, how to surface the richness of the conversation, how to be an academic online.
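
Since embedded licences came up in the list above, here is a minimal sketch of what "machine-readable and embedded" means in practice. The HTML fragment uses the rel="license" convention associated with Creative Commons licence markup; the Python (standard library only, with an invented page snippet) recovers the licence even after the content has travelled:

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect <a rel="license" href="..."> links: the convention used
    for embedding a machine-readable licence in a web page."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "license" in (attrs.get("rel") or "").split():
            self.licenses.append(attrs.get("href"))

# Invented page fragment for illustration.
page = ('<p>Photo by A. Author, licensed under '
        '<a rel="license" href="https://creativecommons.org/licenses/by/4.0/">'
        'CC BY 4.0</a>.</p>')

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)  # ['https://creativecommons.org/licenses/by/4.0/']
```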

Lastly, and most importantly: Public engagement. I talked about this a little in my piece on 21st Century Scholarship and Wikipedia and yet I was surprised by the number of mentions of MOOCs at BeyondthePDF2. I shouldn’t have been. Open access and open education may have forked away from simple principles but at heart they both share a founding principle: the opening up of access to what goes on in universities. They are not the same, they are rife with nuance and sometimes even passionate internal disagreements. But the energy behind the activists, developers and reformers is immense and I’d love to see a little more talking across boundaries. Take a little trip over the bridge!

Interested to visit but not sure where to start? Open Research developers, read a chapter of Into the Wild, and OER infrastructure people read the formats and technologies section of the Force11 Manifesto. I’d love to hear if anyone sees something from the other side that they can use.

Into the wild: Technology for open educational resources

Hot on the heels of my blog book, here’s the main course!

[Cover image: Into the Wild] This was the result of a two-and-a-half-day writing retreat ("booksprint") last August with my colleagues/friends from CETIS: Lorna Campbell, Phil Barker and Martin Hawksey, facilitated by Adam Hyde from Booktype. Terry McAndrew wrote an additional chapter and we had lots of input pre-publication. So a real team effort.

You can get it here!

My Blog Book

When I was wrapping up my work at JISC at the end of 2012, I was keen to do something with my blog posts. Blogging for work had been a great pleasure and learning experience, and I liked the idea of capturing my blood, sweat and tears in something a bit more tangible than a set of URLs. Luckily, I know Zak Mensah. I described what I was thinking about and he offered to create an ebook out of the posts for me. Thus this book was born.

The technical details: it was created from the wordpress xml export of the posts I authored on the JISC digital infrastructure blog. Zak took the xml, edited it and ordered it as I requested, created the visuals, added some wordclouds I'd generated, and provided it back to me in the two main ebook formats. He gave it to me ages ago but I got sidetracked and the time was never quite right to share it.
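
For the curious: a wordpress export is just RSS XML with some extension namespaces, so the first step of what Zak did can be sketched in a few lines of Python (standard library only; the file name is a hypothetical stand-in):

```python
import xml.etree.ElementTree as ET

# WordPress exports are RSS with extension namespaces; the post body
# lives in <content:encoded>.
NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def posts_from_export(path):
    """Yield (title, html_body) for each post in a WordPress export."""
    tree = ET.parse(path)
    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="Untitled")
        body = item.findtext("content:encoded", default="", namespaces=NS)
        yield title, body

# Usage (hypothetical file name):
# for title, body in posts_from_export("jisc-blog-export.xml"):
#     print(title, len(body))
```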

Focus

We decided to group the posts under the key themes that had emerged out of my work in digital infrastructure for learning materials:

[Word cloud: chapter_oerturn]

Sensemaking: Conceptualising Openness

1. Rethinking the O in OER
2. The OER Turn
3. My Story of O(pen)

Sensemaking: Managing open content

4. OER: Metadata Now
5. Making OER visible and findable
6. OER and the aggregation question
7. Experimenting with the Learning Registry
8. UKOER: what’s in a tag?

Sensemaking: Use and Users

9. Making the most of open content: why we need to understand use (Part 1)
10. Making the most of open content: understanding use (Part 2)
11. Connecting people through open content
12. Sharing Learning Resources: shifting perspectives on process and product

Sensemaking: Licensing

13. Choosing Open Licences
14. Licensing Data as Open Data

We also included a section of Update posts in case anyone is interested in the chronology of the work JISC funded in these areas over this time.

Interested?

You can download it from my dropbox as epub HERE or mobi HERE. But read on …

I'm on a steep learning curve with ebooks, from this, from my work with CETIS on the book "Into the wild – Technology for open educational resources", and from my involvement in the JISC challenge of ebooks in academic institutions project. My learning so far is mainly "it ain't as straightforward as you think". So in case you do want to have a look, please note:

  • epub needs an epub reader. Plenty of readers are available for free: I have adobe digital editions for windows and aldiko for android. In my limited experience most PDF readers think epub is a broken pdf and freak out, so tempting as it is to assume you can open it in a PDF reader, don’t.
  • mobi is for kindle (though the route to get a mobi onto a kindle reader without being on the kindle marketplace is somewhat tortuous). If you get the mobi, follow the instructions on "manage my kindle" for personal documents.

I am indebted to Zak for his hard work and patience on this project. He did it in his own time and I owe him more than a few drinks 🙂

Obviously I would LOVE for folk to read my blog book, and comments here would be very welcome!

ambr wishes you a happy new year

2013 brings exciting new developments for my flagship service: ambr.

First of all you will note we have lowercased the product and dropped a redundant vowel in response to market research.

Secondly, ambr has always been open source: coders have been forking me on github for months. But now I'm taking it a step further. All the utterances of the ambr community will now be licensed as CC BY. Please cite yourself CC BY ambr 2013.

Further to the removal of the API, there are now a range of approved integration channels. We also offer a paid bespoking service for all your ambr needs. Our UK-based call centre staff are on call 24/7 to facilitate you.

But the big news is that we are taking ambrAR into production. You’ll remember the launch of the R and D programme to much acclaim: the youtube announcement trended on facetwitterin. We’re now at prototype stage. We will be putting this project up on kickstarter. If ambr is a part of your life you can’t live without, keep it that way: crowdfund me. Soon you can take ambr with you wherever you go, whenever, whether you intended to or not! You can have the full ambr experience, seamlessly, online and off. You’ll forget how you ever lived without ambr (please read the small print).

On a more sober note, following recent unfortunate events, users are reminded to check their privacy settings under our revised T and Cs. As a new year gift to our UK members, Privacy+ is currently available on a 5 year subscription at only 14p a day.

Finally, I would like to thank you: ambr is nothing without its loyal users. You make ambr what it is.

Your futr is bright, your futr is ambr.

Happy New Year

A Thousand Words

Visual literacy has been a big theme for me this year.

A long time ago my very forward-thinking English A Level teacher, Mr Carr, taught us John Berger's Ways of Seeing, which gave me a respect for visual skills. Yet I tend not to think of myself as a very visual person. I take terrible photos, I probably prefer music to the visual arts, and I think I have a better memory for what people have said than for what they look like.

Yet for me, 2012 has been the year of the visual.

  • I love infographics: information is beautiful has been a revelation for me: I’ve realised that I think quite spatially, so seeing information represented as patterns and shapes and relationships really works for me
  • Timelines work so well too, Lou McGill’s OER timeline is great, it is so much more accessible than the same story told in prose
  • I love data visualisations: I first met social graphs through Tony Hirst's OUseful, and that partly inspired Lorna Campbell and me to commission Martin Hawksey's visualisation of the UK OER programme
  • I like the way visual.ly works: another example from Wizard Hawksey was the ukoer vs score twitter analysis
  • I loved Suzanne Hardy’s suggestion (during a chat) that we should understand colour theory so as to read statistical visuals more carefully: that effective colour use sways the way we read graphics
  • I listened to a great podcast by Dan Roam, recommended, I think, by David Flanders. He suggests that we think visually and verbally with two different parts of our brain, and that being able to take an image from verbal to visual and back again is a useful tool to hone the real meaning of what we are thinking about
  • I can't tell you how much I love the animations from the OER IPR Support team, especially the one on turning a resource into an open educational resource (and there is a new one on licensing open data, not yet launched* UPDATE: here it is!). I love the humour in the line drawings and the way they communicate some quite tricky concepts in a digestible way.
  • A while back I sketched a diagram and realised that rather than spend hours making it all proper and glossy it might be better to just take photos of my sketch as it developed and use it like that. That became my work post on connecting people through content.
  • I’ve also wanted to do more polished images though, so I have had a good play with easel.ly, which reinforces how much I have to learn.
I used easel.ly to make this:

[Infographic: C21st Scholarship and Wikipedia, made with easel.ly]

Brian Kelly recently wrote about the strong feelings people have about infographics, responding in part to discussions around the one I made, above. Various people said it wasn’t an infographic. He concluded that:

The accompanying image does, in the depiction of the education level of Wikipedia users, convey a certain amount of 'infographical' information, but the remainder is a poster. I think we can conclude that there are fuzzy boundaries between posters and infographics.

There is probably, however, less fuzziness between those who find infographics useful and those who dismiss them as marketing mechanisms for presenting a particular viewpoint while hiding the underlying complexities.

In his post Brian referred to an incident where two of my favourite people, Tony Hirst (Open University and amongst other things, maker of social graphs) and Mark Power (JISC CETIS mobile web expert and a photographer) were snapped earlier this year having a mock argument about infographics 🙂

More seriously, I think there is a really interesting technology story here too.
It's very fashionable in tech circles to sneer at QR codes. This tumblr did make me laugh: pictures of people scanning QR codes (the implication being, of course, that no-one uses them). Regardless of whether QR codes are useful or not, I have a theory that their real legacy will be that they have driven image recognition apps on our mobile phones. They have connected the marketers, the hardware, the software and the smartphone user skills that are required for a richer visual technology stack.

And enter the news that Facebook was buying instagram. Photographs are rich in data about what people are wearing, eating, reading, making … If I were a company trading on data about consumers, I would want to get access to photos too. Instagram is trendy, and people use it on their smartphones, so there is plenty of geo-tagging too. How long before our photos are scanned for logos: car brands, soft drinks, fashion labels? The amazing thing is that as well as those brands being placed in films and TV to convince us to buy them, the marketing people will be analysing our photos to find out who their consumers really are. Images are data, and speeding up the ways to decode, tag, map and correlate that data is big money. "A picture is worth a thousand bucks".

As an aside, and as I've said on this blog before, I am not opposed to the Facebook business model, as long as we understand the trade-offs we're making. I'm pointing out the instagram story because I like to understand the way technology develops. This is not a rant, it is an exploration. I'm not really interested in comments about the pros and cons of facebook and instagram.

Back to talking of images as data … I find it really interesting that at the same time as technology is making it possible to derive text from images, we are also seeing text being mined and represented as visuals. Text becomes data becomes images becomes data becomes text. ("A thousand books is worth a picture"?) Technology is gradually going to enable much more fluidity between formats, and I find that really interesting.

All of this fell into place for me this evening reading this thought experiment: an essay on a universal language of images by Trey Ratcliff. Imagine the human race had never started writing things down and instead had developed photographic techniques. I recommend this essay to you.

As we’re sliding through the second decade of the new millennium, something new is happening. We all have cameras in our mobile phones and taking a photo of something is far more efficient than typing a sentence about it …

As our streams become more about imagery than words, all of us will evolve a new sense of visual literacy. It is important to note that imagery is not better or worse than text — it is simply different …

Billions of people now have a totally new way to communicate, and we will all discover this new visual literacy together. Now, finally, our ideas and thoughts and feelings and stories can effortlessly travel across borders, cultures, and time …

If that doesn’t make you curious about the way the technology is developing to support visual literacy, I don’t know what would.

The Git and the Pendulum

(This has nothing to do with Edgar Allan Poe, I just liked the pun. Sorry to disappoint.)

Having worked in technology and education since the late 1990s I’ve witnessed several swings in what is deemed to be common sense or received wisdom of “what’s best”. I’m starting to notice patterns, and I have a sneaking suspicion that this is nothing particularly specific to the field I work in, but a wider pattern of how fields of practice evolve.

I’m going to be lazy and let you look anything unfamiliar up on wikipedia.

To start off, I love the analogy in Wittgenstein’s semantic river.

Philosophers, please forgive my inexactness: I merely want to sketch out how this concept informs my thinking, not to describe or critique it.

So …

At the top of the river is the fast-flowing water of everyday lived experience. Below that, the silt: the fluid mud that rolls along the river bed, slower than the water but faster than the stones. You can see and touch the silt; it starts to get tangible. Below that the stones, each one a thing with boundaries, each one describable, but slowly moving with the direction of the river. Then the rocks, moving imperceptibly slowly. His analogy is that this is what meaning is like, and for him, language is the meaning. Big concepts feel like rocks, unquestionable, but in truth all is fluid, all is affected by the flow; it's a question of time. This past year I've been using the word "churn" a lot, and for me this is often what I'm thinking about.

Sidenote – if you want to really blow your mind, there was an amazing programme a while back about waves and how in some ways all life is waves. It's just a question of time and distance in space. I must watch it again.

Next concept: Dialectic

As in … thesis and antithesis. One person says "I think A", another says "no, not A, it's Z". In a process of discussion, the choice of A gets shifted a little to B, Z gets swapped for Y … and the position that comes to be discussed is somewhere between … G and P. It's not to say that all ideas reach consensus, but that there are forces at play that mean ideas change in relation to each other, and, I think, people change their positions in relation to ideas.

That’s what I mean by the pendulum: there is a natural swing between preferred options, the options backed by the majority. A good example is the pendulum between centralised “vs” distributed technology, local “vs” outsourced technology expertise, etc.

Physicists – this is where my lack of hard sciences shows. I know that a true pendulum settles in the centre, but please forgive me some creative leeway. (I'm actually a little scared of how far this whole post could be ripped to pieces!)

Sometimes the dialectic is up at one end of a discussion, so it's between Z and T: a small but hotly contested arena of debate. An example of a debate up at the far end of a spectrum is that between gratis "vs" libre in open source, or free "vs" open in open educational resources. It's fascinating to watch the question of gratis vs libre starting to gain weight in the space of open access to research papers, taking the shape of "what sort of creative commons licence should be applied to a research paper?". By watching the trajectories of other "opens", I predict that although it hasn't been a big focus, it will start to become more important.

It's interesting the way that a tussle within a short stretch of the pendulum, say between P and S, can be really important to the progress of a field, but to the folk aligned to the left of M it looks like silly in-fighting. I fear that the political left sometimes confuses the internal discussions with the external discussions, and could do with a bit of the brash confidence of the right in the pretence that there is a common-sense position. Folks who believe that rocks don't move much can too easily interpret the movement of stones as a lack of a bedrock.

So … a case study in this is the open source movement, and the promised Git of the title. I mentioned above the gratis vs libre concept. I think the big pendulum has been swinging from A as "economically foolish" to Z as "economically sound". Meanwhile open source has branched off into a diversity of approaches, from purist to hybrid. At the purist end we find github.

I am not a programmer, but I think I understand the concept of github. Don't just share the source code as on sourceforge: host the source code in a shared place where it can actually be used/played/run. It takes the concept of open source one step further, to where code can be worked on together. Github is clearly an amazing thing, but to assume that it is the only trajectory of open source would be to misunderstand the way fields develop.

[postscript prompted by the comment from Graham Klyne: I refer here to a tendency to see the live editable source model as an answer to everything. That tendency doesn't necessarily come from users/advocates of github but from people like me who grasp what it offers. I call it "githubification", and I mean that not as a negative comment on github, but as a caution that a borrowed model cannot necessarily be applied to a long-standing problem space like sharing learning materials and magically work. I think that sharing learning materials is a socio-technical issue, like sharing code, and that though the technical solution might look the same, the sociological/human factors might not be.]

So there we have it. No doubt riddled with inaccuracies and misunderstandings, but this is my take on the Git and the Pendulum.

Openness in universities: the sunlight effect

Excited to be heading off tomorrow to the Flossie 2012 Conference.

I’ll be speaking as myself, sharing an overview of different forms and characteristics of openness in universities. For me this is a chance to meet with other women working in open tech and open culture, and to reflect on the sorts of initiatives I’ve been working on for nearly a decade.

Here are my slides.

Looking forward to it!

Hunches on web directions

This is a bit of a risky post. I enjoyed the NMC Metatrends in educational technology and, this evening, a post on "what I'm obsessed about". I usually try to include links and explanations, but I could take forever to prep this. Instead, I'm going to publish it quick and dirty, with just a few links, and maybe follow up with something more polished another time.

So, these are some hunches I’ve had about where the web is going over the next couple of years. It’s informed by lots of things I’ve read, and conversations with lots of clever folk developing and analysing web stuff (see end).

Here goes …

Orchestrating
This has always been part of software development, but now it is coming centre stage. And it's not always about systems interoperating: it's about standards (open and proprietary) for content/data that allow it to flow between places on the web. Characteristics of this orchestration of the flow are:

  • Migration – e.g. how easy it is to migrate blogposts
  • Syndication – particularly RSS
  • Rules – yahoo pipes led this for RSS, but now we have if this then that

The interesting thing is that end users are starting to have this capability: we don’t have to code to orchestrate.
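
Ironically, the quickest way to show what an "if this then that" rule over RSS looks like is to code one. Here's a minimal sketch using the (real) feedparser library; the feed URL and the "then" action are invented stand-ins:

```python
import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.org/blog/feed"  # hypothetical feed

def matching_entries(feed_url, keyword):
    """The 'if this' half: entries whose title mentions a keyword."""
    feed = feedparser.parse(feed_url)
    return [e for e in feed.entries
            if keyword.lower() in e.get("title", "").lower()]

def then_that(entry):
    """The 'then that' half: stand-in for posting to another service."""
    print(f"Matched: {entry.get('title')} -> {entry.get('link')}")

for entry in matching_entries(FEED_URL, "open education"):
    then_that(entry)
```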

Hybridisation
I had thought that html5 plus epub plus mobile versions plus apps would lead to convergence: more samey platforms/tools. But then I had an aha moment: because of the orchestration trend, I think I'm seeing more hybrids. Say there are 15 common activities; the hottest tools won't do all 15. They'll do 2-5 of them really well. So people will create bespoke paths, unique orchestrations, depending on their key activities.

Marketplaces
It's been said before, but I think it's reaching the mainstream. The barriers to buying, downloading and installing software have never been lower, through the App Store, google play, amazon, facebook apps etc. It means people are becoming more comfortable with trying and discarding software. "There's an app for that" (on android too) is fun; it's a sign of identity. We use instagram like wearing ray bans: apps are brands. In another way, hopefully that should improve quality: a more responsive supply of software. The agility is in the market itself. I can't believe I'm saying the market should help improve the product. But what's the cool factor in kickstarter? Crowdfunding. A healthy app marketplace would have some of the same effect.

Zero, cheap, premium
The orchestration trend plus the marketplace trend means that we're only at the beginning of freemium. Just as we change our shopping habits, I think we'll experiment with mixing and matching apps. Recipes will mix free and paid-for; some recipes will focus on free, some on budget products. We might even get go compare-style comparison of orchestration recipes that we will keep tweaking. We get bored, we try a new freemium product, we commit, we cancel the other subscription. Also, I just heard the concept of "frugal innovation" today and it resonates a lot: cost matters.

Free via social data
Some of the giant infrastructure-ish services will remain free by trading on our social data: google or facebook or twitter. This is aka "paradata", and my interest in it is that perhaps it can fund the web remaining free (as in cashless) at the point of use. Which I don't have a problem with, personally (see my earlier post). If you haven't got your head around the adage that "if you're not paying, you're the product", read this. Oh, and to make this work, we'll see a much stronger pull towards identifying ourselves by logging in through one of those services. In the future, the business funding the internet knows you are a dog (see quote, 1993) and, what's more, it knows your favourite pedigree chum (rule of the age: verb every noun). I know some think the big services will disappear and the bubble will burst or wither. The question is when: next year, 5 years, 15 years?

Everything is data, and data is good enough
The services around twitter and facebook are good examples of how our status updates are mined for space, time, sentiment and URLs. To borrow the phrase from open data folk: data mining is "good enough" for marketers to fund. In academic circles, with their attention to accuracy, these approaches are sometimes perceived as not yet mature enough. But data doesn't need to be perfect: it's all about the patterns. Texts, images and sound will all be mined for patterns that can reflect our semantic universe back to us. Services like format shifting, translation and text-to-speech are going to get a lot easier to orchestrate, and with low-cost options.
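
To illustrate just how crude "good enough" can be, here is a toy sketch of mining status updates for URLs and sentiment, with a three-word lexicon standing in for a real trained model:

```python
import re

URL_RE = re.compile(r"https?://\S+")
# Toy sentiment lexicon: real services use trained models, but
# "good enough" pattern mining can start this simple.
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"hate", "broken", "awful"}

def mine(status: str) -> dict:
    """Extract URLs and a naive sentiment label from one status update."""
    words = {w.strip(".,!?").lower() for w in status.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return {
        "urls": URL_RE.findall(status),
        "sentiment": ("positive" if score > 0
                      else "negative" if score < 0 else "neutral"),
    }

print(mine("I love this essay on visual literacy http://example.org/essay"))
# {'urls': ['http://example.org/essay'], 'sentiment': 'positive'}
```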

Smart licenses and smart remix platforms
Machine-readable copyright licences are going to develop further. The key is that providers and users both benefit from the flow of authorship/ownership and permissions data; no one loses. In one conversation with a very clever guy a couple of years ago we coined the phrases "beautiful attribution" and "elegant citation": basically, it's going to get easier, even automatic. And these will co-evolve with smart web remix platforms. Pinterest took the big risk; those that follow might be more respectful of rights. The services/standards/tools that mix content together without orphaning it will grow. The remix platform that also handles rules and accounts in an elegant way will be a key milestone.
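
As a sketch of what "beautiful attribution" could mean in practice, assume the authorship and permissions data travels with the content as structured fields (the Work shape and values below are invented for illustration); a TASL-style (Title, Author, Source, Licence) credit line can then be generated automatically:

```python
from dataclasses import dataclass

@dataclass
class Work:
    # Hypothetical metadata fields; in practice these would be read
    # from embedded machine-readable licence markup.
    title: str
    author: str
    source_url: str
    license_name: str
    license_url: str

def attribution(work: Work) -> str:
    """Generate a TASL-style credit line from the work's metadata."""
    return (f'"{work.title}" by {work.author} ({work.source_url}), '
            f'licensed under {work.license_name} ({work.license_url})')

print(attribution(Work(
    title="Oresund Bridge at dusk",
    author="A. Photographer",
    source_url="https://example.org/photo/123",
    license_name="CC BY 2.0",
    license_url="https://creativecommons.org/licenses/by/2.0/",
)))
```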

So, those are my hunches. In other words, if I were a venture capitalist, that's what I'd be investing in. I'm not, obviously, otherwise I wouldn't be sharing my hunches with you 😉 I'd be very interested in links and leads for fleshing this out. It's all a bit meta. This is 2012 and meta is the new black.

[As well as online sources, ideas in here have come from conversations with Brian Kelly, Tony Hirst, Nicole Harris, Sarah Currier, Peter Robinson, Sheila McNeil, Paul Walk, Mark Power, David Kernohan, Phil Barker, Andy McGregor, Ben Showers, Doug Belshaw, my family … and, well, I’m lucky enough to know a LOT of clever people. So, lots. And sorry if I left you off.]