24 July 2008

Holiday blues

Sam's got the blues ...



... and we'll be back to "normal" next week. Hope you've all had a great time. We went to Denmark for a quickie, and to the mountains of Trysil (Norway) for a week's cabin-fever fun. We've had a good time, but it's good to get back, too. Now, what was I doing again?

3 July 2008

Round and round it goes

This morning was a good one. I got on the bus, breakfast banana in hand, and right there in front of me sat fellow Topic Mapper Stian Danenbarger (from Bouvet), who, it turns out, lives literally just down the road from me. I've been living at Korsvoll (in Oslo) for 6 months now without bumping into him; how odd is that?

Anyways, the last few days I've written about Language and Semantics and about context for understanding communication (all with strong relations to programming languages), and needless to say this became the topic (heh) of discussion on the bus this morning as well.

In this post I'll try to summarize the discussion so far, weaving in the conversation from the bus this morning along with a discussion I've had with Reginald Braithwaite on his blog, in "My mixed feelings about Ruby". Let's start with Reginald and move backwards.

Background
Matz has said that Ruby is an attempt to solve the problem of making programmers happy. So maybe we aren’t happy with some of the accidental complexity. But can we be happy overall? Can we find a way to program in harmony with Ruby rather than trying to Greenspun it into Lisp?
I think that the goal of making programmers happy is a good one, although I suspect there's more than one way to please a programmer. One way is perhaps rooted in the syntax of the language at hand. Then there's the semantics of your language keywords. Another is to have good APIs to work with. Another is how meta the language is (i.e. how much freedom the programmer has in changing the semantics of the language, where Lisp is very meta while Java is not at all), and yet another is the community around it. Or the type and amount of documentation. Or its run-time environment. Or how the code is run (interpreted? compiled? half-compiled to bytecodes?).

Can we find ways in programming that would make all programmers happy? I need to point back to my first post about Language and Semantics and simply reiterate that there's a tremendous lack of focus on why we program in most modern programming languages. Their idea is to shift bits around, and seldom to satisfy some overall, more abstract problem. So for me it becomes more important to convey semantics (i.e. meaning) through my programming than just to have the ability to do so. Most languages will solve any problem you have, so what do the different languages offer us? In fact, how different are they most of the time?
At this moment in time I have extremely mixed feelings about Ruby. I sorely miss the elegance and purity of languages like Scheme and Smalltalk. But at the same time, I am trying to keep my mind open to some of the ways in which Ruby is a great programming language.
I think we really agree here. My own experience with over 8 years of professional XSLT development (yes, look it up :) has taught me some valuable lessons about how elegant functional programming can be, just like Lisp and the mix-a-lot Smalltalk (the one of the two I like less). But then I like certain ways that Ruby does things too, with a better syntax for one. I like to bicker about syntax. Yeah, I'm one of those. And I think I bicker about syntax for very good reasons, too;

Context

In "just enough to make some sense" I talk about context; how many hints do we need to provide in order to communicate well? Make no mistake; when we program, we are doing more than solving the shifting of bits and bytes back and forth. We are giving hints to 1) a computer to run the code, and 2) a programmer (either the original developer, or someone else looking at her code). Most arguments about syntax seem to stem from 1), in which case 2) becomes a personal opinion of individuals rather than a communal exercise. In other words, syntax seems to come from some human designer trying to express hints to the computer in order to shift bits about, instead of focusing entirely on their programming brothers and sisters.

The first quote, about Ruby being designed to please the programmer, would imply that 2) was in focus, but the focus of that quoted statement is all wrong; it pleases some programmers, but certainly not all, otherwise why are we even talking about this stuff?

Ok, we're ready to move on to the crux of the matter, I think.
I am arguing that while it is easy to agree that languages ought to facilitate writing readable programs, it is not easy to derive any tangible heuristics for language design from this extremely motherhood and apple pie sentiment.
Readability is an important and strong word. And it is very important, indeed. We need everything to be readable, from syntax to APIs to environments and onwards. I think we all want this pipe-dream, but we all see different ways of accomplishing it. Some say it's impossible, others say it's easy, while people like Reginald are, I think, right there in the middle, the ultimate pragmatic stance. And if I had never done Topic Maps I would be right there with him. Like Stian Danenbarger said this morning, there's more to readability than just reading the code well.

Topic Maps

Yeah, it's time to talk about what happens when you drink the kool-aid and accept the paradigm shift that comes with it. There are mainly three things I've learned through Topic Maps;
  • Everything is a model, from the business ideals and processes, to design and definition, our programming languages, our databases, the interaction against our systems, and the human aspect of business and customers. Models, models, everywhere ...
  • All we want to do is to work with models, and be able to change those models at will
  • All programming is to satisfy recreating those models
Have you ever looked at model-driven architecture or domain-driven design? These are somewhat abstract approaches to creating complex systems. Now, I'm not going to delve into their pros and cons, but merely point out that they were "invented" out of a need that programming languages didn't address, namely the focus on models.

Think about it; in every aspect of our programming life, all we do is try to capture models which somehow mimic the real-life problem-space. The shifting of bits wouldn't be necessary if there wasn't a model we're working towards. We create abstract models of programming that we use in order to translate between us humans and those pesky computers that aren't smart enough to understand "buy cheap, sell expensive" as a command. This is the main purpose of our jobs - to make models that translate human problems into computer-speak - and then we choose our programming language to do it in. In other words, the direction is not language first and then the problem, but the other way around. In my first post in this series I talked about tools, and about choosing the "right tool for the job." This is a good moment to lament some of what I see as the real problems of modern programming languages.

What objects?

Object-oriented programming. Now, don't get me wrong, I think OOP is a huge improvement over the process-oriented imperative ways of the olden days. But as I said in my last post, it looks so much like the truth that we mistakenly treat it as truth. The truth is there's something fundamentally wrong with what we know as object-oriented programming.

First of all, it's not labeled right. Stian Danenbarger mentioned that someone (can't remember the name; Morten someone?) said it should be called "class-based programming", or - if you know the Linnean world - taxonomical programming. If you know about RDF and the Semantic Web, it too is loosely based on recursive key/value pairs, creating those tree-structures as the operative model. This is dangerously deceitful, as I've written about in my two previous posts. The world is not a tree-structure, but a mix of trees, graphs and vectors, with some semi-ordered chaos thrown in.

Every single programming approach, be it a language or a paradigm like OOP or functional, comes with its own meta model of how to translate between computers and the humans that use them. Every single approach is an attempt to recreate those models, to make it efficient and user-friendly to use and reuse those models, and make it easy to change the models, remove the models, make new ones, add others, mix them, and so on. My last post goes into much detail about what those meta models are, and those meta models define the communication from human to computer to human to computer to human, and on and on and on.

It's a bit of a puzzle, then, why our programming languages focus less on the models and more on shifting those bits around. When shifting bits is the modus operandi and we leave the models in the hands of programmers who normally don't think too much about those models (and, perhaps by inference, programmers who don't think about those models go on to design programming languages in which they want to shift bits around ...), you end up with some odd models, which most of the time are incompatible with each other. This is how all models get shifted to the API level.

Everyone who has ever designed an API knows how hard it can be. Most of the time you start in one corner of your API thinking it's going smoothly, until you meet the other end, and you hack and polish your API as best you can, and release version 1.0. If anyone but you uses that API, how long until the requests for change come in? Bugs, "wouldn't it make more sense to ...", "What do you mean by 'construct objects' here?", and on and on and on. Creating APIs is a test of all the skills you've got. And all of the same can be said about creating a programming language.

Could the problem simply be that we're using a taxonomic programming-language paradigm to try to create graph-structured applications? I like to think so. Why isn't there native support in languages for typed objects, the most basic building block of categorisation and graphing?

$mice = all objects of type 'mouse' ;

Or cleanups?

set free $mice of type 'lab' ;

Or relationships (with implicit cardinality)?

with $mice of type ('woodland')
add relationship 'is food' to objects of type 'owl' ;

Or prowling?

with $mice that has relationship to objects of type ('owl')
add type ('owl food') ;

Or workflow models?

in $workflow at option ('is milk fresh?') add possible response ('maybe')
with task ('smell it') and path back to parent ;

[disclaimer : these are all tongue-in-cheek examples]

I know you can extend some languages to do the basic bidding here - in JavaScript, for example, I can change the prototype of basic objects and types - but it's an extension each programmer must make, and the syntax is bound to the limits of the meta model of the language, making most such extensions look kludgy and inelegant. And unless programmers know about all the problems I think we've been talking about here, they really won't do this. This sort of discussion certainly does not appear where people learn programming skills.
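To make that concrete, here's a tongue-in-cheek sketch of what such an extension might look like in JavaScript. Nothing here is a real library; the registry, the Typed constructor and the helper names are all mine, invented just to mimic the pseudo-syntax above.

```javascript
// A hand-rolled registry, since the language itself gives us no
// "all objects of type X" to query against.
var registry = [];

function Typed(types) {
  this.types = types || [];
  this.relationships = [];
  registry.push(this);
}

// Roughly "$mice = all objects of type 'mouse'"
Typed.allOfType = function (type) {
  return registry.filter(function (o) {
    return o.types.indexOf(type) !== -1;
  });
};

// Roughly "add relationship 'is food' to objects of type 'owl'"
Typed.prototype.addRelationship = function (name, targetType) {
  Typed.allOfType(targetType).forEach(function (target) {
    this.relationships.push({ name: name, target: target });
  }, this);
};

var owl   = new Typed(['owl']);
var mouse = new Typed(['mouse', 'woodland']);
mouse.addRelationship('is food', 'owl');
```

Workable, but that's rather the point: the model lives in my ad-hoc registry, not in the language, and every programmer would roll their own incompatible version of it.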

No, most programming languages follow the tree-structure quite faithfully, or more precisely the taxonomic model (which is mostly trees, but with the odd jump (relationship) sideways in order to deal with the kludges that didn't fit the tree). Our programs are exactly that; data and code, and the programming languages define not only the syntax for how to deal with the data and code, but the very way we think about dealing with blobs of data and code.

They define the readability of our programs. So, Reginald closes;
Again we come down to this: readability is a property of programs, and the influence of a language on the readability of the programs is indirect. That does not mean the language doesn't matter, but it does make me suspicious of the argument that we can look at one language and say it produces readable programs and look at another language and say it does not.
Agreed, except I think most of the languages we do discuss are all forged on the same OOP and functional anvil, in the same "shifting the bits and bytes back and forth" kind of thinking. I think we need to think in terms of the reason we program; those pesky models. Therein lies the key to readability: when the code resembles the models we are trying to recreate.

Syntax for shifting bits around

Yes, syntax is perhaps more important than we like to admit. Syntax defines the nitty-gritty way we shift those bits around in order to accomplish those modeling ideals. It's all in the eyes of the beholder, of course, just as every programming language's meta model has its own answer. What is the general consensus on good syntax that conveys the right amount of semantics for us all to agree on its meaning?

There are certain things that seem to be agreed on. Using angle brackets and the equals sign for comparisons of basic types, for example, or using colon-equals to assign values (although it's about 50/50 on that one), using curly brackets to denote blocks (but not closures), using square brackets for arrays or lists (but not in functional languages), using parentheses for functional lists, certain keywords such as const for constants, var for variables (mostly in loosely typed languages, for some reason) or int or Int for integers (basic types or basic type classes), and so on. But does any of this really matter?

For shifting bytes around, I'd say they don't matter. What matters is why we're shifting the bytes around. And most languages don't care about that. And so I don't care about the syntax or the language quirks of inner closures when inner closures are a symptom of us using the wrong tools for the modeling job at hand. We're bickering about how to best do it wrong instead of focusing on doing it right. Um, IMHO, of course, but that's just the Topic Maps drugs talking.

Just like Robert Barta (who I'd wish would come to dinner more often), I too dream of a Topic Maps (or graph based) programming language. Maybe it's time to dream one up. :)

2 July 2008

Just enough to make some sense

I've realized that my previous post on language and semantics could possibly be a bit hard to understand without having the proper context wrapped around it, so today I'll continue my journey of explaining life, universe and everything. Today I want to talk about "just enough complexity for understanding, but not more."

Mouses

Let's talk about mouse. Or a mouse. Mice. Let's talk about this ;

One can argue whether this is really enough context for us to talk about this thing. What does "mouse" mean here? The Disney mouse? A computer mouse? The mouse shadow in the second moon? In order for me to communicate clearly with my fellow human beings I need to provide just enough information so that we can figure this out, so I say "mouse, you know, the furry, omnivorous, small critter that ..." ;


This is too much information, at least for most cases. I'm not trying to give you all the information I know about mice, but just enough for me to say "I saw a mouse yesterday in the pantry." Talking about context is incredibly hard, because, frankly, what does context mean? And how much background information do I need to provide to you in order for you to understand what I'm talking about?

In terms of language, "context" means verbal context (the words and expressions that surround a word) and social context (the connection between the words and those who hear or read them, based on human constraints like age, gender and knowledge). There's some controversy about this, too, and we often also imply certain mental models (the social context of understanding).

In general, though, we talk about context as "that stuff that surrounds the issue", from solid objects, ideas, my mental state, what I see, what I know, what my audience see and knows, hears, smells, cultural and political history, musical tastes, and on and on and on. Everything in the moment and everything in the past in order to understand the current communication that takes us to the future.

Yup, it's pretty big and heady stuff, and it's a darn interesting question; how much context do you need in order to communicate well? My previous post was indeed about how much context we need to put into our language and definition in order to communicate well.

A bit of background

Back in 1956 a paper by the cognitive psychologist George A. Miller, "The Magical Number Seven, Plus or Minus Two", changed a lot of how we think about our own capacity for juggling stuff in our heads. It's a most famous paper, and further research since has added to and confirmed its basic premise: there's only so much we're able to remember at the same time. And the figure that came up was 7, plus / minus 2.

Of course that number is specific to that research, and may mean very little in the scheme of more specific settings. It's a general rule, though, that hints to the limits we have in cognition, in the way we observe and respond to communication. And it certainly helps us understand the way we deal with context. Context can be overly complex, or overly simple. Maybe the right amount of context is 7, plus / minus 2?

Just right



I'm not going to speculate much on what it means that "between 5 and 9 equally-weighted error-less choices" defines arbitrary constraints on our mental storage capacity (short-term especially), but I'll certainly speculate that it guides the way we can understand context, perhaps especially where context is loosely defined.

We humans have a tendency to think that those things that look like the truth must be the truth. We do this perhaps especially in the way we deal with computer systems, because, frankly, it's easy to define structures and limitations there. It's what we do.

An example of this is how we observe anything as containers that may contain things, which in themselves might be containers containing more things or containers, and so on. Our world is filled with this notion, from taxonomies, to object-oriented programming, to XML, to how we talk about structures and things, to how science was defined, and on and on and on. Tree-structures, basically.

But as anyone with a decent taxonomic background knows, taxonomies don't always work as a strict tree-structure. Neither does the world of anyone who's meddled in OO for too long, or fiddled with XML until the angle-brackets break. These things look so much like the truth that we pursue them as truth.

Things are more chaotic than we like. They're more, in fact, like graph structures, where relationships between things go back and forth, up and down, over and under already established relationships. It can be quite tricky, because the simple "this container contains these containers" mentality is gone, and a more complex model appears;


This is the world of the Semantic Web and Topic Maps, of course, and one of the reasons why these emerging technologies are, er, emerging is that all containers aren't really containers at all, and that the semantics of "this thing belongs to that thing" isn't precise enough when we want to communicate well. Explaining the world in terms of tree-structures puts too many constraints on us, so many that we spend most of our time trying to fit our communication into them rather than simply expressing it.
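To make the tree-versus-graph point concrete, here's a minimal sketch (the names and structures are mine, purely illustrative): a containment tree gives every thing exactly one parent, so a cross-cutting relationship like "owl eats mouse" has no slot to live in, while an edge list treats containment as just one relationship type among many.

```javascript
// A containment tree: every thing has exactly one parent.
var tree = {
  name: 'animals',
  children: [
    { name: 'rodents', children: [{ name: 'mouse', children: [] }] },
    { name: 'birds',   children: [{ name: 'owl',   children: [] }] }
  ]
};

// "owl eats mouse" crosses the branches; the tree has no slot for it.
// A graph keeps the hierarchy as just one relationship type among many.
var edges = [
  { from: 'rodents', to: 'animals', type: 'subclass of' },
  { from: 'mouse',   to: 'rodents', type: 'instance of' },
  { from: 'birds',   to: 'animals', type: 'subclass of' },
  { from: 'owl',     to: 'birds',   type: 'instance of' },
  { from: 'owl',     to: 'mouse',   type: 'eats' }        // the sideways jump
];

function related(name) {
  return edges.filter(function (e) {
    return e.from === name || e.to === name;
  });
}
```

In the tree, getting from owl to mouse means climbing up to the root and back down another branch; in the graph it's just another edge.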

We could go back to frame theory as well, with recursive key/value properties like those you find naturally in b-trees, where a value is either a literal or another property. RDF is based on this model, for example, where the recursiveness is used for creating graph structures. (Which is one reason I hate RDF: using anonymous nodes for literals.)
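A rough sketch of that recursive key/value idea (my own ad-hoc structures, not any actual RDF API): a statement's value is either a literal or a reference to another node, and it's the node-valued statements that turn the recursion into a graph.

```javascript
// Frame-ish key/value statements: the object of a statement is
// either a literal (a plain string) or another node (an object).
var mouse = { name: 'mouse' };
var owl   = { name: 'owl' };

var statements = [
  { subject: mouse, predicate: 'label', object: 'a small furry critter' }, // literal
  { subject: owl,   predicate: 'eats',  object: mouse }                    // node: a graph edge
];

// A value that is a plain string is a literal; anything else points on.
function isLiteral(statement) {
  return typeof statement.object === 'string';
}
```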

Programming languages and meta models

Programming languages don't extend the basic pre-defined model of the language much. Some languages allow some degree of flexibility (such as Ruby, Lisp and Python), some offer tweaking (such as PHP, Lua and Perl), others offer macroing and overloading of syntax (mostly the C family), and yet more are just stuck in their modeling ways (Java). [note: don't take these notions too strictly; there's a host of features to these languages that mix and match various terms, both within and outside of the OO paradigm]

What they all have in common is that the defined meta model is linked to shifting bits and bytes around a computer program, and that all human communication and / or understanding is left in the hands of programmers. Let's talk about meta models.

Most programming languages have a set of keywords and syntax that make up a model of programming. This is the meta model; it's the foundation of a language, a set of things on which you build your programs. All programming languages have more or less of them, and the more they have, the stricter they usually are as well. Some are object-oriented languages, others functional, some imperative, and yet others mix things up. If I write ;

Integer i = new Integer( 34 );

in Java, there's only so many ways to interpret that. It's basically an instance of the Integer class, holding the integer value 34. But what about

$i = new Int ( 34 ) ;

in PHP? There is no built-in class called Int in PHP, so this code either fails or produces an instance of some user-defined class called Int, and we do not know what that means, at least not at this point. And this is what the meta model defines; built-in types, classes, APIs and the overall framework, how things are glued together.

As such, Java and .Net have huge meta models defined, so huge that you can spend your whole career in just one part of them. PHP has a medium meta model, Perl an even smaller one, all the way down to assembler with a rather puny meta model. Syntax and keywords are not just how we program; they define the constraints of our language. There are things that are easy and things that are hard in every language, and there is no one answer to what the best programming language is. They all do things differently.

The object-oriented ways of Java differ from those of Ruby, which differ from the ways of C++, which differ from the ways of PHP. The functional ways of Erlang differ from XSLT, which differs from Lisp.

The right answer?

There is no right answer. One can always argue about the little differences between all these meta models, and we do, all the time. We bicker about operator overloading, about whether multiple inheritance is better than single inheritance, about the real difference between interfaces and abstract classes, about getter and setter methods (or the lack thereof), about whether types should be first-class objects or not, about what closures are, about whether to use curly-brackets or define program structure through whitespace, and on and on and on.

My previous post was another way of saying that we should perhaps argue less about the meta model of our language, and worry more about the reason a program was created than about how a certain problem was solved. We don't have the mental capacity to juggle too much stuff around in our brains, and if the meta model is huge, our ability to focus on the important bits becomes less.

There are so many levels of communication in our development stack. Maybe we should introduce a more semantically sane model into it, to move a few steps closer to the real problem: the communication between man and machine? I'm not convinced that either OO or functional programming solves the human communication problem. Let's speculate and draw sketches on napkins.