The third week of the LAK11 course had 'semantic web and linked data' as its topic. I don't know if it was the topic or if it is normal behavior in the third week of a MOOC, but I noticed a dramatic drop in the activity level. The volume of discussion in the forum, for example, dropped to a third of that in the previous weeks. Or is it that business and regular work really start to eat up all attention again near the end of January?
I enjoyed brushing up my understanding of the semantic web and of how researchers and standards bodies are (trying to) shape it. The semantic web is a kind of Valhalla, and I remember talks about it back when I was at university. But what can I buy with the semantic web today? It has so much potential, but where are the large-scale applications and tools? It all seems so experimental and embryonic, especially compared to last week's technologies (big data), which are already in full production. It wasn't until I read Inge's post that I started to make sense of it all (e.g. her suggestion that curricula could redesign themselves!).
The notion of 'linked data' seems like a 'light' or subset version of the eternal dream that is the semantic web, and one that already works for us today. Whether via semantic links that let computers make sense of the web, via linked data, or simply via tags, linking islands is a VERY good idea in the network age.
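To make 'linking islands' concrete, here is a minimal sketch of my own in Python using the rdflib library. The URIs and the `discusses` property are invented examples, not any official vocabulary; the point is just that once links are expressed as triples, any RDF-aware agent can follow them.

```python
# A minimal linked-data sketch using rdflib: three "islands" (a course,
# a blog post, a person) joined by explicit, machine-readable links.
# The example.org namespace and the 'discusses' property are my own inventions.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")  # hypothetical namespace for this sketch

g = Graph()
g.add((EX.lak11, RDF.type, EX.Course))
g.add((EX.week3_post, EX.discusses, EX.lak11))   # blog post -> course
g.add((EX.week3_post, FOAF.maker, EX.me))        # blog post -> author
g.add((EX.me, FOAF.name, Literal("A. Blogger")))

# Any agent that speaks RDF can now follow the links between the islands:
for s, p, o in g:
    print(s, p, o)
```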
What did I do this week?
I've read through all the assigned readings. I watched web inventor Tim Berners-Lee's TED talk, in which he asks the world for its raw data, and read his article on the semantic web that started it all. I like his idea of distributed trust systems, and the 'Oh yeah?' button next to any data prediction that explains how the system came up with a certain conclusion.
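As an aside, the 'Oh yeah?' button is easy to imagine as code. Below is a toy sketch of my own (not from the readings): a forward-chaining inference loop that records which premises produced each conclusion, so the justification chain can be replayed on demand. All facts and rules are made up.

```python
# Toy 'Oh yeah?' button: an inference loop that remembers why it
# concluded what it concluded. Facts and rules are invented examples.

RULES = [
    # (premises, conclusion)
    ({"works_at(Ann, AcmeCorp)", "based_in(AcmeCorp, Ghent)"}, "works_in(Ann, Ghent)"),
    ({"works_in(Ann, Ghent)", "lives_in(Ann, Ghent)"}, "commutes_locally(Ann)"),
]

def infer(facts):
    """Forward-chain over RULES, recording which premises justified each new fact."""
    provenance = {}  # conclusion -> set of premises that produced it
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                provenance[conclusion] = premises
                changed = True
    return facts, provenance

def oh_yeah(fact, provenance, indent=0):
    """Answer the 'Oh yeah?' button: print the justification chain for a fact."""
    print("  " * indent + fact)
    for premise in provenance.get(fact, []):
        oh_yeah(premise, provenance, indent + 1)

facts = {"works_at(Ann, AcmeCorp)", "based_in(AcmeCorp, Ghent)", "lives_in(Ann, Ghent)"}
facts, prov = infer(facts)
oh_yeah("commutes_locally(Ann)", prov)
```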
I also stumbled on this YouTube video, in which the interviewee compares tagging to 'semantic web light'. It seems that whereas the complexity of the semantic web, and the need for people to adopt its specific language in their data, stand in the way of large-scale adoption at present, tagging is so 'light' and simple that it has already proven its value. Simple things rule! I can't imagine any half-decent web app without tagging anymore...
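A toy illustration of why tagging works despite carrying no formal semantics: two items become related the moment they share a tag, with no ontology needed. The data here is invented.

```python
# Tagging as 'semantic web light': no ontology, no shared vocabulary,
# just free-form labels -- yet islands get linked. (Toy data, my own example.)

tags = {
    "week2-post": {"lak11", "big-data", "analytics"},
    "week3-post": {"lak11", "semantic-web", "linked-data"},
    "ted-talk":   {"semantic-web", "linked-data", "video"},
}

def related(item):
    """Return other items ranked by the number of tags shared with `item`."""
    shared = {other: len(tags[item] & tags[other])
              for other in tags if other != item}
    return sorted((o for o, n in shared.items() if n), key=lambda o: -shared[o])

print(related("week3-post"))  # ['ted-talk', 'week2-post'] -- linked via shared tags
```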
Some articles got very conceptual near the end, and I wondered whether we really need all that semantics. Google Translate, for example, works fine without understanding anything.
I also played with Freebase and Google Public Data.
I could not replay any of the Elluminate recordings, as I couldn't get the speaker sound to work. Is it just me, or did other people face that difficulty?
And I just looked at what NodeXL can do, but did not play with it myself. My company has decided, in all its wisdom, that we don't need Microsoft Excel anymore and can do everything in Lotus Symphony, so I can't run Excel add-ins.
And before realizing I was already skipping ahead to next week's topic, I played around with some visualization tools. Here are some images from LinkedIn Labs and TouchGraph for Facebook (which also works for Amazon and Google). I also played with ManyEyes.
Last but not least, I added some more use cases for learning analytics in corporations in the forum. Who's next?
What sense did I make of it all?
The lower layers of the semantic web stack are shaping up nicely, with triple-based vocabularies (subject-predicate-object), ontologies, etc. The upper layers of trust and proof, and of dealing with ambiguity, are the most interesting for me, and there a lot of work still seems to remain. (One of the readings speaks of 'claims' the system finds, not 'facts'.) Or not. Because the semantic web is just a language to describe things and their links, while trust and proof are assigned by the context and judgment of people. We have different ways of looking at the world, and therefore of assigning meaning to stuff and links. I do find it very interesting to see how far we can 'predict' the value of evidence claims. It also links back to my book on competent people. Is there a clever way to assign proof of your competence? Could a competence crawler search the web for claims of your competence in domain X and grade the proof of those claims?
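Just to make the competence-crawler idea tangible, here is a purely hypothetical sketch: claims about a person are collected, weighted by a human-assigned trust level per source, and averaged into a rough score. Sources, weights, and numbers are all invented for illustration; the trust table is exactly the part that stays a human judgment call.

```python
# Hypothetical 'competence crawler' scoring: aggregate claims about a
# person's competence in a domain, weighted by trust in each source.
# All names, sources, and weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    person: str
    domain: str
    source: str      # where the claim was found
    strength: float  # 0..1, how strongly the claim asserts competence

# Trust assigned to each source type by context and human judgment --
# the part the semantic web itself cannot decide for us.
SOURCE_TRUST = {"peer-review": 0.9, "endorsement": 0.6, "self-claim": 0.2}

def competence_score(claims, person, domain):
    """Trust-weighted average of all claims about `person` in `domain`."""
    relevant = [c for c in claims if c.person == person and c.domain == domain]
    if not relevant:
        return None
    weighted = [c.strength * SOURCE_TRUST.get(c.source, 0.1) for c in relevant]
    return sum(weighted) / len(weighted)

claims = [
    Claim("ann", "statistics", "peer-review", 0.8),
    Claim("ann", "statistics", "self-claim", 1.0),
]
print(competence_score(claims, "ann", "statistics"))  # (0.72 + 0.2) / 2 = 0.46
```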
The semantic web is ultimately about computers assigning meaning via links, and meaning is always embedded in context. Hence, this research might also help learning professionals put learning content and interventions in context.
Ultimately, I'm a big fan of linking and of providing relationships between data and people. It is the underlying reason why I like my definitions blurry. In the network age, everything gets linked to everything. I believe that the next wave of innovation and prosperity will no longer come from specializing ever deeper in the silo of our research area or corporate department, but from linking and integrating with 'the rest'. In corporate learning, that means integrating with the other 'people' functions of HR and aligning with business results. Away with the islands; they stand in the way of progress! Blurry definitions are a curse for most people, but for me they make it easy to stumble onto other people's territory and make the link.
Back to learning analytics: I see great potential for the semantic web and linked data to enable 'serendipity learning', as those tools will go out and make you stumble upon stuff you did not know was out there.
And as for visualization, I hope those tools quickly find their way out of labs and separate applications into the mainstream, integrated into Facebook, LinkedIn, Moodle and other community tools as standard features. I think we are ready for that.