Recall how I have been neglecting my blogging due to just too much work these days? Well, the day when the crazy busy stops is today. After working non-stop with some pretty lofty crazy funky Topic Maps stuff, getting some traction on the potential and seeing some light at the end of a very busy tunnel, it's a bit odd for me to announce ;
My project has been canned for strategic reasons, and I'm out of a job.
Well, now, things move quickly around here; one moment I'm crazy busy doing crazy stuff, the next I'm crazily trying to figure out what to do next. If anybody needs a brain for hire, this is your chance!
The stupid thing is that I live in the Illawarra / Wollongong area, an area renowned for its lack of jobs and for being chemically free of IT and innovation (the local government tried to fix that by calling Wollongong 'the city of innovation' and building a huge innovation center that's too expensive for any startups to get into). I guess this is a good time to link to my LinkedIn profile, no?
However, I'm not bound to living here. I'll go anywhere, especially interesting places and interesting countries. I've got three little kids to worry about so I can't jump too quickly, but I can certainly hop around New South Wales and possibly Canberra, and sort out where to live after that. Or I can telecommute, or work fully from home. I'm not fussy at this point.
I guess this is where I tell you what I can do; through my 20+ years career I've chewed over and gained specialist expertise in just slightly too abstract things, like Topic Maps (a fantastic technology that doesn't exist in Australia, but obviously should, and you should hire me to teach you how to kick arse!), XSLT (a fantastic technology far too few use and truly understand, but you should, and you should hire me to teach you how to solve complexity with simplicity!), AI (a crazy notion only researchers use, and I'm not a qualified researcher, but you should hire me just because you value solutions over credentials!), library technology and science (a dying cluster of concepts, but you should hire me to get you up to speed and make you thrive and survive!), epistemology (don't even know where to begin with this one, but you should hire me because I'm telling you!), and training / speaking / coaching on a mixture of the above (been on the conference speaking circuit a bit, and held tons of training sessions and seminars). I guess more conventional stuff I can do out of the box is complex PHP (lots of that the last few years), front-end stuff (HTML, CSS, JS/JQuery, the usual suspects) and middle-tier stuff (JSP, JSF, XSLT, a bit of Ruby, a pinch of Python, Perl if I'm crazy, ASP / .Net to some degree) ... heck, I used to be a proficient C developer in my early days, and there's really no language I can't meddle in. Oh, I know; I'm big on architecture with SOA concepts and REST technologies, although I'm not one of those who think UDDI is great and promote ESB or some such bundle of WS-* wherever I go, even though I can do that, sure (but I'd recommend a RESTful approach for a simpler and richer infrastructure). I've created CMSes of various kinds, frameworks (event-driven, load-balancing, etc.), APIs, query engines, Topic Maps engines and KM platforms, all from scratch, so it feels a bit silly to explain what I can and can't do. There's probably nothing I can't do as such.
My strong side, though, is being creative; seeing possibilities and opportunities, and planning with those in mind. I'm one of those who embrace somewhat agile methods without being an agile-head (coaching is, in fact, a satisfying way of getting things done), and I have a special interest in and knack for project management. If you need a project manager with a coaching style in areas of knowledge management and using Topic Maps, well, you know who to call. :)
Anyway, on the off-chance that you think I'm the guy to help you out, you can reach me any time at alexander.johannesen@gmail.com. And if you've got pointers, point me there, or point your pointers here. It's a tricky situation when it happens this abruptly, but I took a risk going into this project, so I knew this was a possibility. Let's just hope it pans out alright. And thanks for any help you can provide in this crazy time. :)
28 May 2010
21 May 2010
What I did last weekend ...
I've been meaning to blog this for a while, but work and, uh, work got in the way. However, as a band-aid for low blogging I bring you a special edition of "What I did last weekend!"
The week before (Tuesday? Wednesday?) I was out on my lunch-time walk to get some much-needed coffee (a poor substitute for Masala and / or proper Chai tea), and as I was coming back up the hill I noticed something interesting in front of me ;
On a trailer behind a big four-wheel-drive off-road monster of a car sat a big boat, similar to the modern fishing vessels down by the harbor, except it was all white with a big blue logo on its side which read "Australian Museum."
I was intrigued, and I wondered to myself if they were lost or something, having accidentally taken a wrong turn on the way to somewhere exciting and come down our street to ask for directions. I had stopped, and while I was pondering these things, a lady came out of the rental house it was parked in front of, and we started some idle chatter about why this boat (with all its magical capabilities) was here of all places.
Well, it turns out the Australian Museum goes out once a year on an excursion to collect samples for both the collection and for scientists. I showed some vague interest in the matter (I think I fell to my knees, pleading for insight, frothing at the mouth), and was invited in to have a look at their setup. I got up and skipped into the house like a little girl, giggling and grinning all the way.
I looked at their photo setup, talked some technical stuff, and then moved downstairs to their lab setup. There were a bunch of them (6 in total, I think), of about my age and up, and I had a quick glance around and a lovely but quick chat with them, mentioning that I'd love to show my kids, and we were all invited back on the weekend.
That's what I did last weekend; I took Lilje and Grace down to their lab, and we got stuck talking to the most wonderful bunch of scientists for about 2 hours. They showed us their findings, specimens, the worms and other invertebrates, plants, shrimps, urchins, you name it, all sorts of little critters, most of which got put in formaldehyde for an all-expenses-paid trip to provide us with knowledge.
The kids loved it, and I loved it, and I wish to thank the team and the museum for brightening our day and for shining light into dark corners.
6 May 2010
Of models, frameworks and the paradigm shifts we don't see
I desperately need to talk about models. I hunger for them, I need them, I love them! They are pretty, of course, arousing us and making us do silly things like buy stuff we don't need, or fooling us into buying something we think we need, only for us to later find out that we were a bit stupid, misled by the model's winks and beautiful attire. However, they are often more than just skin deep, more than just the abstract notion of our perversion to objectify everything we see. Confused yet? These models are not found on the cover of glossy magazines.
These models are everywhere. They are how the brain takes a group of concepts - be it physical objects or knowledge nuggets in our head - plonks them all into a grouping of sorts, and draws lines of meaning between the new group and the old ones it knows about. Together they form anything from thoughts to language to the reasoning used when voting. Between person A and political issue B we draw a relation of type "opinion" with opinion C. A few fuzzy million of these, and we can pin you down pretty well.
Where do models live in our various systems? Well, they live in language, for the most part. And this is not some cryptic ballyhoo I'm inventing here, so I'll demonstrate with the humble book. Let's look at a description of a book in the foreign and mysterious language called Eksemelle ;
The book
<book>
  <title>Some title</title>
  <author>Clemens, Samuel</author>
  <pompador>In a nutshell</pompador>
</book>
"book", "title", "author" and "pompador" mean something to someone. To most people who understand the English word "book", it means those rectangular objects made of dead-trees that's got some kind of letters and / or pictures in them. They are, as we say, semantic in that they have meaning to those who knows what those words mean. Here's the word "book", and it has a definition of sorts; that is explicit semantics. But then there's the implicit semantics of what those words mean in terms of this being an XML snippet, maybe from a specific bibliographic format, maybe with an even more specific XML schema (unless the XML schema is made explicit through something like a DOCTYPE). And finally there's the tacit semantics in the space between the model, the framework, and the people who work with it. Let's explore these shark-infested semantic parts.
How are they semantic? In what way are they meaningful, and to whom? Well, let's start with the who. No model is really valuable unto itself; there's always some external framework that understands (and appreciates?) the model, some "thing" that looks at and interacts with that model in order to make it useful. Indeed, a model's usefulness is often measured by how well the semantics of various models match up. For example, the usefulness of the MARC model can be measured by how well it matches the model librarians work with and need. Needless to say, the usefulness of the MARC model can also be measured by how useful it is to a bricklayer, but more on this later.
So in what way are they meaningful? Every time we talk about models, we are really talking about a translation that's going on between models. The model of the words in this blog post is translated first from my brain into the model used by my blogging software, which uses some models of the Internet and computers and complex electronic networks and systems, and is then translated into the model of your brain. We take a piece of semantics, and we try to make the transition from my brain to yours - through a multitude of other models - as smooth as possible. Have I succeeded so far? Do our models match up a little, much, or not at all?
Things are meaningful when the models match up, when there is little or no difference between them to make the thing that's understood in one model hard to understand in another. An example of a semantic mismatch in models is indeed the semantics of the 245$a field for a bricklayer looking for his bricks.
Constraints on entities
In the past I've talked about how models are constraints on entities, and this still holds true, but it needs a bit of clarification to make sense. First, what are entities?
Well, "entities" is one of those words that we can make mean pretty much anything we like, but let's take the simple view of entities as "things you can talk about", similar (or identical) to subjects in Topic Maps, "concepts" in philosophy, or, to the layman, "things." Anything you can think of. Any subject, fictional or real, physical or surreal; a boat, a thought about the Moon, the idea of North, the concept of the number 1, the Eiffel Tower, MARC, a MARC record, an XML representation of that record, the book the MARC record represents, a physical book, the relationship between the book and the abstract notion of a book that the MARC record represents ... oh, the possibilities are - truer than anything! - endless.
Between the entities, in the cracks of our language and our understanding, flow their relationships. They are part of our model, those notions that give our entities a meaning of sorts, what makes them semantic ;
"This book" was written by "this author"
Look at what we found in the cracks; "was written by." Isn't it grand? This trifecta is known in the Semantic Web world as a triplet, basically a tuple with a subject - predicate - object structure. Make enough triplet statements about a thing, and it grows in semantics. Let's look at our book ;
"This book" is our subject
"was written by" is our predicate
"this author" is the object
Let's make another triplet statement ;
"This book" is still our subject
"was published by" is a new predicate
"this publisher" is another object
"This book"
- "was written by" : "this author"
- "was published by" : "this publisher"
We got a subject (or an entity) that we're attaching predicates and objects to, in many ways similar to how we might add named key/value pairs of properties to an object, or create a lookup-table with column indexes between two tables in a relational database, or set named properties on a Java Bean, or even scribble two statements about a book in its margin. We're attaching some semantics to something.
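If you like code better than prose, here's a minimal sketch in Python of the same idea - the statements as plain tuples, with a toy lookup. The names are illustrative only, not any particular RDF or Topic Maps API ;

# Subject - predicate - object statements as plain tuples.
triples = [
    ("this book", "was written by", "this author"),
    ("this book", "was published by", "this publisher"),
]

# Gathering everything said about a subject is just pattern matching
# over the statements; the more statements, the richer the semantics.
def about(subject, statements):
    return [(p, o) for (s, p, o) in statements if s == subject]

print(about("this book", triples))
# [('was written by', 'this author'), ('was published by', 'this publisher')]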
When we attach meaning to something, we are constraining the possibility that those properties might be something else. We are saying that the title of the book is X, and by saying it is X we can infer that it isn't the millions and millions of other titles that it could have been. Before we put that title to the book it really had only two options;
Either the book had no title, or it could have any title we could imagine. But by labeling the book with a specific title, we're constraining both of these options, changing them dramatically from the endless possibilities we had to one specific option. We constrained the book by giving it meaning. And the more meaning we give it, the less abstract it becomes, the more constrained it becomes.
Reflections in the mirror
The models we have in our heads rarely perfectly match the model we're interacting with. The model in my wife's head is not matched well with my own model in many ways, like shopping, views on the value of shoes, the model of interacting with people (she's the one with a model closer matched to the generic likable model shared by most social and nice people) and the concept of geekery (of which she has none). And she is a person that's quite well matched in general. It only gets worse from here.
Now imagine the semantic distance between me and a computer system I've designed. I can work with it. I understand it. I can get my job done. However, I show my perfect model to a customer, and they immediately start picking it apart, pointing out how my model doesn't match their (or their individual) model. How could I have been so blind?
Here's a little secret to why usability and user-centered design work; you test your model against most other people's models in order to get to some model that you all can reasonably match with. When you don't test your model against the users' models, it is bound to suck.
Models are reflections of how we humans see our world. The MARC standard most certainly reflects how the librarians saw the world, how it matched their needs and wants. My programs are often a reflection of how I see things. Your browser is a reflection of its developers' model of how your browsing should be. This blog post is a reflection of me. Your computer is a reflection of some manufacturer. The operating system is a reflection of yet more developers' views.
Sure, we try to create standards that either reflect some common model of things, or at least a common language in which to describe this view. However, it is terribly difficult to come to models that match well across so many thousands and millions of possible models. I'm almost tempted to say we need some constraints of commonality on our models in order to create semantics, to better understand our various models, to better agree on and share them.
Where models sleep at night
Where do we find the actual models when we peer into the computer systems we use? Somewhere, surely, rests that model which we try to match to our own; somewhere in there amongst the code and the interface we can point to it and say, "There!"
Mostly we can't. However, there is one place where you'll find a lot of it, one place so holy in computer science that people dedicate whole careers to dealing with its innards; the relational database.
I need to speak about the relational database a bit because, well, so many of the computer systems out there a) have one, and b) store much of their model in there. Yes, there are alternatives, and more and more technologies pop up that try to do things differently, but let's be realistic about what 99% of computer systems use; RDBM systems. Let's have a look at a small corner of one ;
What we see is tables; columns of fields, rows of entries. These form the entities of these models. The example even uses a tricky third table to function as a lookup-table between two tables of entity data (I won't go into too much technical detail here; there's a reconstruction in the sketch below). To get data in and out of these tables, we use a query language like SQL to say things like this simple example, "get all fields (with their rows of data) from tables A and B, where table A has a field 'book_id' that matches its value with a field 'id' in table B, sort it by the field 'title' of table A in descending order, and give me the first 40 results."
One SQL statement that matches for that is a semi-cryptic "SELECT * FROM TableA,TableB WHERE TableA.book_id = TableB.id ORDER BY TableA.title DESC LIMIT 40", or some other variety (there's tons of different ways of saying the same, with or without JOINs, sets and filters).
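If you want to see this in action, here's a hypothetical reconstruction in Python using the built-in sqlite3 module. The schema is my guess at the sort of tables in question, not any real system, but the query is the semi-cryptic statement above, verbatim ;

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (book_id INTEGER, title TEXT);
CREATE TABLE TableB (id INTEGER, author TEXT);
INSERT INTO TableA VALUES (1, 'Some title');
INSERT INTO TableA VALUES (2, 'Another title');
INSERT INTO TableB VALUES (1, 'Clemens, Samuel');
INSERT INTO TableB VALUES (2, 'Somebody Else');
""")

# Three levels of semantics at once: table names, column names,
# and row contents. Get any of them wrong and the query is nonsense.
rows = con.execute(
    "SELECT * FROM TableA,TableB "
    "WHERE TableA.book_id = TableB.id "
    "ORDER BY TableA.title DESC LIMIT 40"
).fetchall()
for row in rows:
    print(row)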
Let's look for our model. First of all, it's in the names of the tables, the names of each column in them, and lastly the contents of the rows. Notice that this is three levels of semantics nested within each query, and you need to know all of them to make reasonable statements. But what else is in our model? Well, it's those pesky relationships between things, the constraints on our entities that make them meaningful, and they exist in the query itself, in your application. Think about that for a second, think about where these things go ;
The table, columns and rows go in the database system. The querying goes in your application. The user interface (which we haven't even dug far into but is a wormhole of complexity in its own right) that interacts with you also sits in the middle, in the application. So there's a model in your database, and you reconstruct that model in your application (otherwise, how could you query the database if you don't know what it looks like?) and yet do further things that are not embedded in the database (and so, you've got a super model ... fun with pun!), translate further between your users and the user interface, translate back into the application which translates back into the database ... ah, what fun spaghetti games that makes.
Surely RDBMS and SQL are better than the alternatives? Well, they were for many years, a way to solve the problem of doing things in far worse ways, for sure. But we were also under constraints of computing power, in which we couldn't just do the right thing and still get a computer that gave you answers in time for Christmas. It was a compromise between the need for any answer to all your data, and the practical need for lots of answers to most of your data.
But this analogy can be taken further, especially into the world of MARC and the libraries. MARC itself was also designed with lots of peculiar constraints, funny rules and structuring, and then, with the added AACR2 (and earlier friends) rules for manual data integrity, it surely reflected the best of breed at the time, reflected what they wanted, and it was something that matched their outlook at the outset. So we got the model of MARC (I've called it the culture of MARC in the past, but model suits just as well) in MARC itself, in the rules we add to it, in our ILS, our OPAC, our catalog, our acquisitions, our collection management, everything. And then the model of MARC is everywhere, and it even starts to dictate our human processes.
Time flies
But then time flies, and the world changes, sometimes unexpectedly, and all the harsher when it is. Just as there's a lot of push for alternatives to RDBMS these days, because today and tomorrow are somewhat different from yesterday, there's the same push in the library world to go from MARC / AACR2 to something more like FRBR / RDA.
However.
When you create a model of the future you need to make sure it is future proof. You have to make sure that not only does the model match what you need right now, that it reflects the funky stuff you wanna do today, but it must be able to deal with the future. If it doesn't, well then you are going to have to go through the pains of changing the model sooner rather than later.
Here's a few thoughts on that process from the library perspective. FRBR was designed 15 years ago, when the world was sloooowly waking up to the new fresh brew of the Internet and technology. Take a good look at what the world was like back then, especially paying attention to the fact that books were still the main container for knowledge and information, mobile phones did nothing more than make calls, Internet devices were practically unheard of, no eBooks, no iPads, no eEducation, no eGovernment ... for Fraggs sake, the evil monster of Netscape was still alive and doing harm! I remember still writing Netscape-specific code to deal with its quirks. This was a time before we all stopped hating the dying Netscape and focused on the evils of Internet Explorer instead. Can you even remember back to that time, and did you think that the world might be in your back pocket or pack in the shape of an eReader or iPhone, or that Amazon (in its infancy) would have a full-blown infrastructure including their own eReader with tons of titles a click away? Or perhaps even more profoundly, that Wikipedia would lead the way to the disjointed revolution of knowledge and information? That we would be twittering? That all higher educational institutions would move towards ePresses? That paper journals would turn into online journals? That the pricing models of online content would change? That the price of admission would change? That even the model of content negotiation would be different? That blogs would dominate the future of discourse, even the serious academic ones? That newspapers would ever fail?
Time flies. And models change. Some models are better at dealing with change, some are better at being future proof, but change they will. And when the models change, you must either change your own models, update your models, or use outdated models. FRBR and RDA are outdated models before they're even implemented. Please reconsider.
Model models
The last ten years or so there's been a stronger push towards meta models. That basically means "simple models in which you can create other models." One might wonder what such a crazy thing would do to help, but let me first exemplify through what I've seen again and again over the years I've worked as a consultant ;
With the smallest changes to any complex system, where databases, tables, columns and rows must change, you've also got thousands of lines of query / SQL that need to change; every model along the way, from the hardwired entities of the database to the user interface controls, must be updated to this new way of looking at the world. It could even be the smallest of things; say, changing the name of a field in a table from "id" to "book_id" (sometimes even stupid things like this are needed because the people who created the original model [called a schema] didn't worry about multi-join SQL statements that would have a hard time dealing with the ambiguity of the many varieties of "id" fields, or didn't have the foresight to think that more than one thing in your tables could rightly be called 'id' ...) could cost in the millions. I know it sounds terribly stupid, probably even terribly untrue, but I swear on the grave of all those programmers who laid down their lives in pursuit of SQL ambiguity and integration and middle-tier testing, that it is a very sad truth.
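To show this isn't just hand-waving, here's a tiny contrived example of the "id" ambiguity, again in Python's sqlite3 (the table names are made up for the occasion); two joined tables both carrying a column called "id" is all it takes for an unqualified query to fall over ;

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books (id INTEGER, title TEXT);
CREATE TABLE loans (id INTEGER, book_id INTEGER, borrower TEXT);
""")

# Which "id" do we mean? The database can't tell, and refuses.
try:
    con.execute("SELECT id FROM books JOIN loans ON books.id = loans.book_id")
except sqlite3.OperationalError as e:
    print(e)  # ambiguous column name: id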
The library is facing similarly insurmountable trouble in switching to anything but MARC, and I suspect the cost analysis is in the multi-millions wherever you look. People are starting to ask if the change will be worth it (and with the criticism laid out about FRBR you might want to have a closer look before you leap). Is there another way?
Well, sure, kinda. There are meta models; models that are somewhat ready but need your lovely input and tweaking to get perfect. They are generally easier to deal with because, unlike models, they are designed to be a bit vague and, well, meta about it all. And yes, indeed; Topic Maps is such a technology. The model in Topic Maps is simple (sketched in code after this list) ;
- A subject is anything you can ever think of, anything you want to talk about, anything in the "real" world, like books and people and cars and thoughts and ideas and ... well, anything.
- A subject is represented in a computer system by a Topic
- Topics have multiple Names, can be of multiple Types, and have multiple Identities and multiple Occurrences
- Associations of different types tie them all together with roles
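Here's a rough sketch of those four bullet points in Python - plain dataclasses with names of my own choosing, the shape of the idea rather than the actual Topic Maps Data Model API ;

from dataclasses import dataclass, field

@dataclass
class Topic:
    names: list = field(default_factory=list)        # multiple Names
    types: list = field(default_factory=list)        # multiple Types
    identities: list = field(default_factory=list)   # multiple Identities
    occurrences: list = field(default_factory=list)  # multiple Occurrences

@dataclass
class Association:
    type: str
    roles: dict  # role name -> the Topic playing that role

book = Topic(names=["This book"], types=["book"],
             identities=["http://example.org/id/this-book"])
author = Topic(names=["Clemens, Samuel", "Mark Twain"], types=["person"],
               identities=["http://example.org/id/samuel-clemens"])

written_by = Association(type="was written by",
                         roles={"work": book, "writer": author})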
If you can wrap your head around such a concept, it's easy to build whatever you need with it, and here's the advantage ;
- A standardized data model and a standardized reference model
- A standardized XML format for exchange, import and export
- A standardized query language and a standardized constraint language
It doesn't matter one bit what data you put into it; all of the above still applies. You can make model-agnostic queries into your data. You can mix whatever data you feel like; there's nothing that can't be put into it. You determine yourself what level of indirection you want on your data. And you can have serious identity management to boot! Did I mention author records done right? Chuck a thesaurus or a faceted navigation system right into the model! Make software modules understand certain languages rather than the combination of languages and data, and share these! Want to see what your data merged with any other data might look like? It's right there in the standard; it comes out of the box. Play with your data, and invent new ways of interacting with it without dicking around for weeks with databases, filtering and merging. And on and on it goes.
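As a taste of what that merging might look like, here's a toy sketch in Python; two records are deemed to be about the same subject when they share an identity, and their properties simply union. This is the idea only, not a conformant Topic Maps merge ;

def merge(a, b):
    # Union each property list; shared identity is what ties the two together.
    return {key: sorted(set(a[key]) | set(b[key])) for key in a}

mine = {"names": ["Clemens, Samuel"], "identities": ["http://example.org/id/twain"]}
yours = {"names": ["Mark Twain"], "identities": ["http://example.org/id/twain"]}

if set(mine["identities"]) & set(yours["identities"]):
    print(merge(mine, yours))
# {'names': ['Clemens, Samuel', 'Mark Twain'], 'identities': ['http://example.org/id/twain']}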
But why do this? Well, since the models exist in a domain which is specially designed to handle the disjointed nature of your models and data, you are free to shape whatever solution you might think of (meaning, you can change the interface without changing the model or the application logic), where in the past you were stuck with the original model design. You can copy and paste your little models and languages around. Try out new things. Merge stuff with ease. And, not least, focus on application specifics without worrying about model integrity. Nor do you have to worry about user interface integrity, either. How to put it in a way that you could understand? It's like taking a bucket of apples and a bucket of bananas, which in the past would, when mixed together, make a sticky slushy fruity goo that no one really likes; now they are genetically merged to make the banapple, which can still be split apart into its raw apple and banana parts if you feel like it.
Yes, I've whinged and raved about this in the library world (and other places) for years and years, but it's hard work getting people up to speed on understanding models, their implications and how meta models might be a better bet, then demonstrating and convincing everyone (including people with no technical background), all while standing in an elevator with some people who are off to tinker with some RDA or MARC or something. It's hard to get their attention when they don't actually see the problem.
But I'm not pushing Topic Maps, really. Well, a little, but more specifically I'm pushing meta models, and I'm pushing for better ways of dealing with your computing infrastructure, to take a few good steps out of the litter sandbox that permeates current library systems' infrastructural designs, and get jiggy with the future before it gets overrun by the cool kids with iPhones and iPads and whatnots that also, you know, have a model or two. Models that may or may not be compatible with whatever model you come up with next. If you do. Seriously, I thought librarians loved meta?
Labels:
conceptual models,
data models,
frbr,
library,
marcxml,
rda,
topic maps
4 May 2010
Linux Ubuntu 10.04 upgrade, ATI MobileRadeon 3650 woes
(Note: If you're after my ATI MobileRadeon 3650 graphics card saga, scroll further down)
As mentioned in my more eclectic piece of yesterday, I stuffed up my computer over the weekend, and now I'm here to tell you all about it as well as tell you about the latest Ubuntu release.
I upgraded through the update manager as I've always done. I know a lot of people prefer to take a backup, clean the machine, install a fresh Ubuntu version, and then copy back their personal stuff and reinstall whatever went missing, as a more secure and painless route. I sneer at such disdain for life and adventure!
I've never had trouble with just clicking the update button beyond the minor snafus you might expect, like drivers for old stuff not working perfectly with new stuff, but that's the nature of going from old to new systems, and doesn't say anything bad about the upgrade process as such. For me, it worked pretty well.
But let's go through the snags, one serious annoyance, an irritating state of affairs, and a few tips for dealing with your ATI Radeon issues ;
Snags
I had mostly no trouble at all, really, except that after the install I noted the following snags ;
- The brightness control no longer controls the brightness (and I now must use my special function keys to adjust it). No biggie, just annoying.
- My Vodafone 3G mobile network adapter software got uninstalled. Why? It was an independent third-party program. Puzzling.
- The applets (minimize and desktop switcher) on my bottom panel (Gnome) have crashed a couple of times. Odd.
- The system seems slower, but I need to investigate a bit further before pointing fingers.
Serious annoyance
Before we get to the good parts, I need to point out this one thing that just stands out as the dumbest thing with this release, and it is just so stupid that no amount of trying to back-pedal out of it with the lamest excuses ever is good enough;
They switched the minimize, maximize and close window icons from the right side (like the rest of the world does it) to the left (like what retarded systems used to do in the 80's and 90's)! Luckily there are a couple of easy fixes, but none of them embedded in the system to make it easy. Friggin' idiots, what were they thinking? I could write a whole book on just how stupid this was, but suffice to say, my over 10 years of UX and GUI design experience - and my personal experience in using these here new (but terribly old) ideas - screams how much it sucks big time! The window control icons now sit on top of the menus you use all the time, and the cognitive distance between doing what you want and doing what will piss you off has been reduced to a swearword's distance on a regular basis. I seriously hope this is not a hint of things to come, or I'll jump ship. I can deal with the odd snafu, but I don't want to deal with continued stupidity.
Irritating state of affairs
I have a Toshiba Satellite P300 (05Y01Y, which is now discontinued ... a true mark of comfort for me) that came with pre-installed Windows Vista, which I killed by accident; I popped Ubuntu 8.10 on there and have been reasonably happy all this time. However, there are a few things that are still terribly annoying, although I know this is the fault of vendors and not Linux (so I live with these blemishes rather than whinge about them). Over the updates things have improved (like Skype and Choqok), but there are still more hardware-related issues that should have easy fixes ;
- Bluetooth does not work, and probably never will. Toshiba sucks when it comes to supporting Linux; their Bluetooth stack obviously is not supported here, nor are the default Linux stacks compatible with whatever my machine's got. However, I'm not in need of Bluetooth, so I haven't spent the appropriate crazy time needed for proper investigation.
- There's an insanely annoying delay (between 0.5 and 2 seconds long!) whenever a window is minimized or maximized (which also means when an application starts up, hides, reappears or closes down, you know, the sort of stuff you do all the freakin' time) when I've got graphics acceleration switched on. This is apparently a bug, and it's been around and come back in various forms. There are a few suggested fixes, but mostly it just doesn't seem to want to go away. This one is seriously bad, and it amazes me that something like this could still be hanging around in an otherwise polished distribution like Ubuntu. There might be a link to the ATI drivers below as well, but the sad part is that, to temporarily not go insane over this, I sadly have to switch graphics acceleration off.
Update! It seems that after trying the suggested fixes for enabling 2D, which at first did nothing, all of a sudden the problem disappeared! Oh, joy! Come to think of it, I did do a reboot not that long ago, and re-installed the desktop switcher applet. Maybe there was some correlation between the switcher and the graphics driver? Who knows. I'm happily running Compiz and Emerald (although the latter is only an experiment I suspect will be uninstalled).
- The two internal microphones on the machine have never worked, and probably never will due to Toshiba's poor Linux support. Crikey, can't they just hire one or two Linux guys to at least tweak existing software with configuration options that might be compatible with some of their more popular models? I could spend a weekend doing this myself, but I've got a USB DSP that does these things just fantastically, so again I won't spend the time chasing it.
- My external Logitech Optical Radio mouse does not work properly. It works in the sense that the accelerator insists that my whole world is either top-left or bottom-right, and there's no easy way in Ubuntu to modify crazy acceleration (though the xset knob in the sketch after this list is worth a try), so I've just given up on it and use it on my Windows machine.
- The internal ATI MobileRadeon 3650 graphics card saga; this one gets its own section below.
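Two quick sketches for the list above, offered with the usual your-mileage-may-vary caveat. For the minimize / maximize delay, a cheap way to test whether the compositing window manager is the culprit (a diagnostic hunch on my part, not a proper fix) ;
metacity --replace &   # plain Metacity without compositing; does the delay go away?
compiz --replace &     # swaps Compiz back in when you're done testing
And for the mouse acceleration, the old-school X knob still works from a terminal even though Ubuntu gives you no GUI for it (the numbers below are just examples; tune to taste) ;
xset m 3/2 4   # pointer acceleration 3/2, threshold 4 pixels; 'xset m default' resets it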
Generics
To me, this upgrade was a bit meh. Nothing stands out as impressive, just small fixes here and there. There's the new menus at the top so many people rave about, but I don't like nor use them, so meh. The new colors are, well, meh. I didn't mind brown. Not sure I mind purple. Meh. I like that more icons are getting the same style and form, but no biggie. The Software Center has been prettied up a bit, but still lacks what's really important, like testimonials, ratings, awards, versioning, people's comments, recommendations, field groupings, and so on. Meh. I'm sure there's more stuff around, and I'll find it eventually. The only real thing that stands out is that some old problems still aren't fixed, and my system is less responsive. Oh, and they switched from Sun to OpenJDK Java, which threw me off for a few minutes as well. Meh.
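(If the Java switch throws you off too, this is roughly how to check and swap what's active; update-alternatives is standard Ubuntu fare, though whether the Sun package is even still installed on your box after the upgrade is another matter ;)
java -version                            # see which Java actually answers
sudo update-alternatives --config java   # pick between OpenJDK and Sun, if both are installed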
The ATI graphics card saga
The ATI / AMD chipset has a shaky support history under Linux, but the good news is that there is some degree of support. And indeed, my screen right now is enjoying OpenGL, higher resolutions and lots of colors, so all should be well. But it is not, at least not "all" well.
It's not that my latest upgrade killed my graphics setup. No, in fact after the upgrade things worked quite well, except I had four ATI Catalyst Control Center icons under System -> Preferences, so I was trying to fix that, and see if there was a fix for the annoying "minimize / maximize delay" bug mentioned further up. But all of a sudden I must have fiddled with the wrong setting (actually, I was uninstalling some unrelated package that wasn't so unrelated after all).
Here's my recipe for fixing a seriously broken and mangled FGLRX install ;
- Purge your filthy system of sin ;
sudo apt-get remove --purge fglrx*
sudo apt-get remove --purge xserver-xorg-video-ati
sudo apt-get remove --purge xserver-xorg-video-radeon
- If you get an error like so ;
dpkg-divert: mismatch on package
when removing `diversion of /usr/lib/libGL.so.1.2 to /usr/lib/fglrx/libGL.so.1.2.xlibmesa by fglrx'
found `diversion of /usr/lib/libGL.so.1.2 to /usr/lib/fglrx/libGL.so.1.2.xlibmesa by xorg-driver-fglrx'
Go to this page over here for general instructions on fixing diversions (there's also a dpkg-divert sketch after this list).
- Go to System -> Admin -> Hardware drivers, and wait for it to collect its thoughts.
- Click on "Install" or "Activate" near the bottom, and wait for it to do its magic.
- Reboot.
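As for the diversion error mentioned in the list, the general shape of the fix (reconstructed from the error output above, so double-check against the dpkg-divert man page before copying blindly) is to remove the stale diversion under the package name dpkg has on record, then retry the purge ;
# remove the diversion registered by xorg-driver-fglrx and restore the original file
sudo dpkg-divert --package xorg-driver-fglrx --rename --remove /usr/lib/libGL.so.1.2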
That worked for me; your mileage will vary. Downloading the (massive; 96MB!!) driver from ATI's homepage, purging and running the downloaded installer did nothing for me. The whole ATI graphics driver brouhaha is amazingly bad and should get some cleaning up. Even the Control Center that ATI delivers looks amateurish, and their software seems to lack any good cleanup strategy. Or even nice GUI ways of setting up important settings (!!).
The next time I'm buying a computer for my Linux Ubuntu adventures, I'll do some pretty hefty research first to make sure I avoid at least some of the biggest pitfalls. Bloody stupid vendors. Meh.
3 May 2010
I turned 30 and my warranty expired
Today's post is different. It is not about any of the usual subjects I relay here, but about growing old, growing up, growing wary; fearing not only for the future of me and my family, but growing wary of humans out of touch with reality, the political rot of the societies we create, and what power and resources together can do to harm our statistics and the people within them. And I want to give Love a brief mention, too.
(I also post this somewhat different topic because my work computer is a bit temperamental after I performed an Ubuntu upgrade, so I'm on our other computer today)
"I turned 30 and my warranty expired"
I'm actually quite a bit older than that, though not yet 40, but the sentiment still sticks out like a sore thumb. As my brain matured and gained a platform of consistency, so did other parts of my body mature to their natural consequences. Slowly I'm feeling age creep up on me; a dodgy knee, faltering memory, eyes and neck trouble, difficulties sleeping.
"I turned 30 and my warranty expired"
I'm actually quite a bit older than that, but not quite yet 40, but the sentiment still stands out as a sore thumb. My brain, as it matured and gained a platform of consistency, so did other parts of my body mature to their natural consequences. Slowly I'm feeling age creep up on me, a dodgy knee, faltering memory, eyes and neck trouble, difficulties sleeping.
We all know the physical constraints of growing older. But I notice more and more the inner-space effects. And no, I'm not getting old and bitter as the old stereotype would have it. No, I'm turning sad and lonely.
Such a statement, for me, has so many levels that it's a bit hard to know where to begin, but let's begin with the simple and obvious ones. I am Norwegian, my wife is Australian. Where do we live? If either of us came from some country that is statistically worse off than the other's, the choice would be easier. If I came from the slums somewhere in Nigeria and she from, say, England or Germany, that's not a choice; it would be a better life for us in Europe, for our children, no question. Even if we had to abandon family in Nigeria, everybody would understand to some extent. Opportunities and security for a growing family beat out war and poverty every time. But Norway and Australia?
Both countries are rated as some of the absolute best countries in the world to live in. And by that alone there are proponents on both sides of the family who can't understand the struggle of actually choosing one side. We've so far lived twice in each country as a family with small children, and there are pros and cons to each side in all aspects of our lives. It's painful to have to make a choice.
Less obvious
Since we are a family, we are a tightly knit group of people who love each other. We stand by each other, support each other, and generally try to make the best of our situations, however they come at us. And we are the best of friends.
But I used to have other friends as well, including a few best friends to whom I was very close. We currently live in Australia (and I suspect will for the immediate to intermediate future). When I moved here, all my friends and family were left behind. We moved to a new area. Australians are hard to befriend in any meaningful way. So I quite often feel alone, outside of the family space. All my interests and passions, especially those I do not share with my wife, are hard to deal with ; my strong passion for baroque music. My deep love of science. Love of food. Bibliophilia and metadata. Geekery. Internet life. Knowledge management. Philosophy (ethics and otherwise). Inventions! Engineering!
Of course we make up for that in the things we do share, and this is still quite a lot to keep me sane and reasonably happy, and it helps to share these passions with her and the kids (to the level they understand them). But I could use a smidgen of intellectual interest in these ignorant marshes we live in.
Even less obvious
World events affect me. Some affect me because they are world events, but the ones that affect me the most are the world events that other people around me don't know or care about. Living in an intellectual desert makes you conservative about the academic nurture you seek out. In normal conversations I feel somewhat ashamed to reveal any hint of knowing stuff, even stuff that's not really all that controversial or hard or uncommon.
What to do when you're aching to tell the world the wonders of the universe, and no one around you really cares to listen? You start making silly plans ;
- Making a documentary about beach ecology and geology. I've got the fact-checking and research done, but I'm waiting to buy an HD camera (one that can also do underwater shots) for my birthday. I'm going to channel my inner deGrasse Tyson.
- Plan to teach your children about the speed of light; an applicable model of air and space travel, and gee, what's the point of all of that? And see the whole "Cosmos" series by St. Sagan.
- Plan to travel to the middle of the desert, looking for minerals, metals and / or interesting biology.
But they are somewhat silly plans. They might happen, but the odds are against me, mostly due to work and family commitments. It's not that I don't want to do these things, heaven knows I do, but time is limited, and, well, I need to make better priorities.
Where should we live? Where is the best place for our family to live? Should we choose the easy life, the rich life, the busy life, the interesting life? Should we sacrifice our complacent, sheltered lives for richer experiences doing good in other countries? Should we apply our collective strengths to make somewhere else - somewhere that does care about bigger issues - a better place for all who live there? Should we risk the personal safety of the now for an idealistic future of others?
Not obvious at all
I feel a disconnect between myself and the society we live in. I am not satisfied having a safe life in a protected environment where the penetration of science sits below what's livable. There are so many topics to discuss, so many issues to sort out.
Politics suck. And I used to be a politician, so I know quite well how rotten the discourse of democracy can get when agendas are played at different levels of the game. Don't get me wrong; I love democracy - the process. I just hate the people who usually enter into it. And I hate the way the cult of personality distorts, disrupts and corrupts the ideals laid out. And most of all, I hate how people in general have no friggin' clue about the implications of epistemology in politics, how their own ignorance of science and facts has an adverse effect on societal progression.
Yes, I like progressive thoughts and actions, not for the sake of progression itself, but for the ideal of not falling into staleness and of always questioning the status quo; what can we do better? Even the best of things can be improved. I'm no fan of "if it works, don't fix it!"
Adequate.
That's a word I hate with the fullness of my person. I don't have a problem with being cautious, but there's a thin line between cautious and conservative, one which I see society fall on the wrong side of all too often. Most often it's done to appease popular opinion. But popular opinion is, by its very definition, only popular and only an opinion. Why are we chasing this rather than the necessary facts?
Back to less obvious
Politics in general is an attempt by the populace to reach an agreeable direction of channeled progression, but I've always held that for this to work you need the populace to also care about, study, and be able to argue with logic about the issues at hand, without falling into the pit of bias and the fear of change.
Change is the key to the universe and all that is within it; all things change, over various kinds of time scales. Rock and metal change over time scales of millions of years, while single-celled organisms and viruses change on time scales of hours or even minutes, and there's everything in between. The scale at which you view the universe is a fascinating exercise in trying to understand the world around you, and by carefully understanding the context of those scales you can get a pretty amazing overview of the scope and direction of huge topics like life, evolution, geology and cosmology, and perhaps a better understanding of your place in the cosmos.
Back to obvious
My place in the cosmos is one of insignificance. I know the physical outlook well enough, but as a sentient human being who would like the world to be my oyster, I can not only understand my insignificance but appreciate it, love it, live it. There's great power in knowing you are invisible; it allows you to see more clearly that which you love.
My family comes first; they are my pillar, the platform on which I base my life, so much is clear. And as a unit it doesn't matter what the cosmos throws our way, as our strength and love is in that unit. It's a self-sufficient ecosystem of family ebb and flow. Politics flows over, through and around us. People do the same. Society is the sand on our beach, tumbling around, always shifting and changing.
There's nothing so important as embracing change. And in this light I view the progressive nature of the human endeavor; humans fighting change every chance they get. Poverty and despair are as much a result of this as of unbalanced resource management, our political systems, and the populace's scientific interest and literacy.
And that is the key to my disjointedness from society, as well as the thing that keeps me going; my family. No matter where we are, no matter what friends we accumulate, we are together.
And even if it is obvious, it's not really that obvious.