Moving to Artima
I'm moving this weblog to Artima; here's the link. In the first post, I explain why.
The existing posts that you see here will remain.
MindView Home Page
I've been quiet here for various reasons, mostly because of a lot of traveling -- right now I'm on a 1-month marathon consulting tour for 3 different companies, all focused on understanding objects and object design. It's been all-consuming, but also productive.
Because a number of people have (appropriately) nudged me about it, I finally took some time and set up the seminar registration for the upcoming Thinking in Patterns seminar, which will take place June 20-24, 2005, in Crested Butte, Colorado. You can read more and register here.
I will be updating the online book and the seminar materials to reflect the changes in J2SE5, and to add new information and patterns I've acquired since I last worked on the book (I'm still finishing Thinking in Java, 4e, so I haven't been working on TIPatterns much, but an upcoming seminar always spurs me on to new work on the book).
MindView Home Page
I think it's been years since I've given a talk shorter than 45 minutes or an hour. Everyone except the keynote speakers at PyCon had half an hour. Naturally I had too much material and had to blaze past the slides at the end. Many people were very kind and told me they had enjoyed the talk, but it felt unresolved to me. All the talks were audio-recorded and videotaped, so eventually you should be able to hear or see them and decide for yourself -- not just my talk, but the entire conference! (The actual distribution strategy has not yet been devised, so you should watch the www.Python.org website.)
It was again interesting to experience and contrast the SD conference with PyCon.
SD is best described as a teaching conference. There are half-day and full-day pre-conference tutorials, and during the conference all the talks are 1.5 hours long (I created both the C++ and Java tracks for the SD conference and chaired them for many years, and so I helped evolve this structure). All the subjects are intended to be well-established on the acceptance curve; this is a commercial conference and they want each topic to have a strong draw. SD is a good place to go in order to develop expertise, almost as if it were a multi-subject professional development seminar.
PyCon is a community conference. It is developed and organized by volunteers from the Python community, and any profits go into the Python Software Foundation (PSF), which promotes the language and lately has even begun to give grants for the development of various aspects of Python (a recent grant was to update Jython, for example). PyCon is thus much more edgy and experimental, and this adds a lot of excitement to the conference.

For example, I had almost forgotten how great the lightning talks are. These are 5-minute talks about whatever you are doing or find interesting. The 5-minute limit is an example of a "liberating constraint": people are willing to take far more risks if it's only 5 minutes. Also, you can easily get a last-minute inspiration and do a talk. Because it's 5 minutes, you don't dally or wander -- you get right to the meat of your subject. And for the audience it's especially nice, because if there's a talk that doesn't interest you, or that you don't understand, all you have to do is wait 5 minutes. And the things you learn are far broader than what you'll find in the more formal talks.

My favorite was a use of PyGame, a game-building framework that's been around for years, continues to amaze me, and is apparently used in commercial products. The presenter apparently holds a contest each year, and this year's theme was randomness. He had created a game called "Ducks," with delightfully primitive drawings for the graphics. The ducks would chase you, drop things, and so on. We didn't see much of it, but it was hilarious -- a game I'd like to play.
PyCon also has "sprints" before or after the conference proper, where you get together around a project and work on it for 1-4 days. Typically this involves writing code, but I coached one where we simply explored a topic; it was really more of an OpenSpace, but I found it helpful.
Although I learned a great deal from the half-hour presentations, the lightning talks and OpenSpace events were what passed the nod test. When I travel to a different time zone I don't sleep that well, and the best way to help me catch up on my sleep is to put me in a room with an eyes-forward presentation. Since I'm not interacting, the steady words from the presenter will often send me off; it's clear in those situations that I'm sleep-deprived. But when the environment is more energetic and interactive (and even though the lightning talks were presentations, they felt more interactive), I feel very awake.
I have organized a number of meetings where we relied solely on OpenSpaces, which means that everyone in the meeting decided what was important to them and everyone at a particular OpenSpace was actively interested in the topic. These are the most nonstop energizing experiences I've had, and the only downside is that at conventional conferences like SD and even at PyCon, I am very aware of "energy gaps," which in the past I've simply accepted as inevitable and unavoidable. Once you've experienced an OpenSpace conference you wake up to the possibilities of what the word "conference" can really mean (both of the conferences I am holding this summer are OpenSpace conferences).
PyCon is more of an agile conference, and they are constantly re-evaluating things. One topic that came up was that a significant number of attendees who are newer to the language would like some sort of tutorial material to come up to speed so that the regular sessions would make more sense to them. So PyCon needs to become a little more like SD in that sense. SD could use more of the kinds of things that inject energy into the atmosphere that PyCon has. And, although PyCon already recognized this, both conferences need to do things that bring people together in the evenings. The PyCon folks tried to get a party sponsored this year, but I've never found a conference-wide party to be all that stimulating, and as they discovered it costs a lot and takes a lot of effort; the payoff isn't that great, in my opinion. A simple thing I've seen done at some conferences is just helping people get into groups to go out to dinner. These could organize around a speaker or a topic, and should probably have limits on size since dinner doesn't scale up so well. Another possibility is to have evenings that are exclusively OpenSpace oriented, since then there would be no competition with talks (perhaps at a conference, most or all of the OpenSpaces should be moved to the evening for this reason).
To me, the best things that happen at a conference are those that start conversations and interactions with people. After all, the reason we travel from all over the place and come together in the same physical space is to connect -- especially now that, with the internet, we can do everything online that doesn't require that kind of connection.
MindView Home Page
Last week at the SD conference Bill Venners heard from the people at Dice.com that the job postings this month are twice what they were last year at this time.
This week at the Python conference in DC we were handed a bag at registration that contained many sheets of paper from different companies saying "we're hiring Python programmers." This says a lot about both the economy and Python.
The conference has roughly 25% more people this year than last. We've outgrown the conference center at GWU, and will have to go to a larger facility.
The conference continues to be very good, very high energy. There is talk of adding a more introductory tutorial track next year, since the sessions have the flavor of a technical conference and some folks would like to get up to speed on the fundamentals first.
Guido van Rossum gave his "State of Python" keynote this morning, and the formerly-named "optional static typing" has evolved nicely into something that seems more palatable, and it will be renamed, since it really isn't about static typing. Everything is still in the early stages, but I think something very interesting could result -- something that would NOT get in the way (which is what most people have feared whenever the phrase "static typing" came up).
MindView Home Page
I'm attending and speaking at the Software Development Conference in Santa Clara this week (this is the same conference where, for many years, I chaired the C++ and Java tracks). Last night a group of us (myself, Bill Venners, Chuck Allison, Allen Holub, and Eric Evans, who wrote "Domain-Driven Design"), went to dinner at the White Lotus in San Jose, a place I try to get back to every time I'm in town.
That day I had seen, among other things, an agile talk by Robert Martin and one by Mary Poppendieck. Although I've been following the printed literature, the last time I had seen any agile talks was at least a couple of years ago. What most impressed me about these two talks was the focus and the level of polish, in particular the shift to more evidence-based presentations. These are far more convincing and compelling arguments than the presentations that appeared when XP and agile first emerged; I would classify those as more "enthusiasm-based."
Eric has been attending XP and Agile conferences and following everything more closely than I, and he said that someone at one of the recent conferences had given a presentation pointing out that the early-adopter phase has ended, and that the current wave of adopters is more conservative and requires clearer evidence in order to be convinced. Because I had already had my own XP-like experiences by the time XP appeared, the enthusiasm-based approach worked on me, but these two presentations had a much more solid and mature feel to them. Of course, the fact that the presenters were very good made a big difference as well.
MindView Home Page
I'm coaching the Adapters and Interfaces Sprint on Tuesday, March 22 at the Python Conference. If you are interested please add your name to the list so I'll know whether to actually do it.
MindView Home Page
I'm out in the boonies for a few days, where there is only a phone line (and a slow one at that). My new notebook computer came with a bunch of free AOL hours, so I thought I'd try that (although I used AOL many, many years ago, I haven't paid attention since then). All I want to do is get on the internet, so I don't care. The signup process works fairly well, but I've noticed that in the last few days my Mailblocks anti-spam service has been receiving messages while I've had only intermittent luck sending them. I tried emailing their tech support about this, and they suggested that I have too many cookies in my browser. Of course I'm using a non-browser email client, so this is completely inappropriate feedback, but this kind of "support" is consistent with what I've gotten from them since AOL bought them.
So my scheme is to use the AOL SMTP address for sending mail, instead of the Mailblocks address. But what could that be? I decide to try the online help, and it comes back and says: "You will need to activate Java technology in your browser to use NetAgent - Java Customer Client." The notebook is very new and I haven't, in fact, installed Java yet. But I find this interesting, because I can't imagine that most people who need help and click on this are going to have any idea how to "activate Java technology" in their browser. (When I did install Java, the AOL instructions didn't work.)
I tried installing the RSSReader on this notebook, and it warned me that I would need .NET installed on the machine in order to use it. I hadn't installed .NET and so I thought I'd see how the system responded if I tried to do it anyway. It installed without any questions, and I discovered that somehow I already had .NET 1.1 on the machine. Without explicitly installing it. Apparently it comes as part of the service pack. Very convenient if you want to distribute a .NET application, but it certainly puts Java at a disadvantage. On the other hand, there are apparently large numbers of unwashed masses still using Windows 95 and 98, so developing any kind of application that doesn't run on those will be problematic.
MindView Home Page
In response to Destructors in GCed languages, Walter Bright added this point:
6) The destructors can automatically call the base destructors, and they do that. But they cannot automatically call destructors on the members. The reason is that class objects are by reference only, so the members are by reference, so the destructor cannot tell if someone else is holding that reference as well. So it can't call the destructor on them. (It could if it used reference counting memory management, but D uses mark/sweep.)
In C++, it's possible to embed member objects in their enclosing class, and so objects of that class clearly "own" the member objects and cleanup can occur deterministically. UML even has a way to graphically distinguish between embedded objects and shared objects.
As soon as you start sharing objects, however, you lose the determinism and so it would seem that there isn't a way to automatically call destructors for member objects.
But how useful is a destructor that does this? It only solves part of the problem, and leaves the rest to the programmer. I think this is why the Java designers decided to punt on the whole destructor issue -- the same reason Walter gives, which is more generally that "our garbage collector is not deterministic enough to know when to clean up objects."
Is this actually the issue, though? Suppose we separate the ideas of memory reclamation and object cleanup, and say that the nondeterministic garbage collector handles memory reclamation while some other mechanism handles object cleanup. This is what Java does: you are provided with the finally clause in order to achieve non-memory object cleanup. D tries to go a step further and create a destructor mechanism, but stops before calling member-object destructors, and thus might do more harm (by implying complete destruction) than good.
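As a concrete Java sketch of that separation (the data.txt file name is just an example): the garbage collector eventually reclaims the object's memory, while the non-memory resource is released explicitly, in a finally clause.

    import java.io.FileInputStream;
    import java.io.IOException;

    public class FinallyCleanup {
        public static void main(String[] args) throws IOException {
            FileInputStream in = new FileInputStream("data.txt");  // example file
            try {
                System.out.println("first byte: " + in.read());
            } finally {
                in.close();  // deterministic cleanup, independent of the GC
            }
        }
    }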
Let's look at the implication that reference counting and garbage collection are the same thing. Reference counting can certainly be used as a garbage collection mechanism, as we see in Python. But reference counting is what its name implies: a way to keep track of the number of references there are to a particular object. The problem with calling destructors for shared objects is exactly this: you need to know whether there are any other references to an object before calling its destructor.
It's fairly easy to write a reference-counting implementation to keep track of the references to a shared object so that you can know when to call the destructor. And this runs within a system that has a separate garbage collector.
The downside is that the programmer is responsible for calling any "addRef" method, and it would be nice if it could all be automated.
If you distinguish between the garbage collector for memory reclamation, and reference counting for destruction of shared objects, I think it is possible to solve the automatic destructor problem for garbage-collected languages. Here's how it could work:
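A minimal hand-rolled sketch of the idea in Java (the RefCounted, acquire(), and release() names are hypothetical, not an existing library; the point of language support would be to automate exactly this bookkeeping):

    // Reference-counted destruction inside a GCed language: the count
    // decides when the "destructor" (destroy()) runs, while the garbage
    // collector independently reclaims the memory later.
    public class RefCounted {
        private int refs = 1;  // the creator holds the first reference
        private final String name;

        public RefCounted(String name) { this.name = name; }

        public synchronized RefCounted acquire() {  // register a new shared reference
            refs++;
            return this;
        }

        public synchronized void release() {  // drop a reference
            if (--refs == 0)
                destroy();  // deterministic: runs as the last reference disappears
        }

        private void destroy() {  // the "destructor": non-memory cleanup goes here
            System.out.println(name + " destroyed");
        }

        public static void main(String[] args) {
            RefCounted shared = new RefCounted("resource");
            RefCounted other = shared.acquire();  // two references now
            shared.release();                     // not destroyed yet
            other.release();                      // count hits zero: destroy() runs
        }
    }

The downside mentioned above is visible here: the programmer must remember every acquire() and release() call; building the counting into the language or runtime would remove exactly that burden.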
D goes further than just offering finally blocks: it also offers scoped destruction when the 'auto' storage class is used. Scoped destruction can be used for resource management just as in C++, and there are other nifty uses for it, such as timing code; see www.digitalmars.com/techtips/timing_code.html.
It doesn't automatically resolve the issue of running destructors on members deterministically, but if you make such members 'private' and don't create other references to them, you can use the 'delete' operator on them to clean them up deterministically.
I will be talking about the new version of Java (J2SE5), and how very different it is in both features and attitude from previous versions of Java, next Friday (March 4), from 3-5 pm at NorthFace University. Free, open to all.
Details and Directions
MindView Home Page
Sometimes, when I can't bring myself to work on Thinking in Java 4e any longer, I learn a little about PHP. Sounds kind of sick, I know, but it's actually refreshing in a kind of "green fields, unlimited possibilities" way.
I skimmed through a couple of books before settling on John Coggeshall's "PHP 5 Unleashed." His voice is good, and the book has a feeling of being carefully crafted.
I had somehow gotten the impression that PHP was a kind of Perl derivative, probably from the '$' before the variables. As I learn about it, however, it seems more like C than anything else. C with '$'s in front of the variables.
It also seems very consistent and well thought-out. Language features seem to follow logically from each other and so far I haven't found anything particularly surprising -- no special cases. Of course, I am getting on board at PHP 5, which seems to have worked out all the kinks, and added objects, which seem to be an amalgam of C++ and Java (mostly Java) with '->' instead of '.' for member selection.
It feels slightly weird to say this, as if, for some reason, I shouldn't like PHP. But I do. It looks like its designers learned well from other languages, and the language doesn't seem hacky at all (again, my perspective is PHP 5).
Of course, I'm aware of the problems of mixing presentation with model, and that at some scale this will probably start causing problems. But I also see the value of it -- if you stay below a certain size and complexity, mixing the two makes it much easier to program. And for me, web stuff is just something I need to get done so that it works, in the most expedient fashion. So I think, when I start building dynamic pages, that I will give PHP a try. I don't know that this is where I'll end up, but it seems promising. Plus it has a boatload of people who are using it, and apparently lots of good libraries. The fact that it is supported on most web hosting providers has a lot of appeal, as well.
I'm not giving up Zope, though. For what I know how to make it do, it works fine. For now, however, I am giving up on making it past the first elbow of the Z-shaped learning curve. I've actually been around that particular bend several times, and slightly up the curve, but I've come to the conclusion that if you don't live and breathe Zope development, it's too complex to hold in your brain between one bout and the next. So I'll use Zope (and maybe graduate to Plone) for as much as I'm able, and switch to another technology like PHP or one of the Python web frameworks for building more dynamic content.
I'm not so sure this is a bad model. The argument for purity is that it's easier to use only one language. But if Zope works for me up to a point, and beyond that point building a page that, for example, stores its fields becomes much harder than it would be in PHP -- and if PHP is reasonably well-designed, so that it stays in my brain between bouts of web programming -- then I'll be more effective with a hybrid of technologies than by remaining pure for the sake of being pure.
The other factor is that I've realized that in my case, I don't build applications on my server that are going to run into scalability problems. But I do run into a big roadblock when trying to develop web applications using the more "pure" approach, and as a result I've been stymied more often than I've been successful.
MindView Home Page
One of the missing dimensions in this discussion about static and dynamic typing is where a language fits on the continuous spectrum of servant at one end and disciplinarian at the other. People who complain about having to spend too much time arguing with the compiler are wanting more servant and less disciplinarian, and those who feel that more static type checking will be helpful are asking for more discipline.
Different situations need different positions on this spectrum. I think the arguments have been ignoring this, and the all-important missing phrase, in brackets, that should precede every pronouncement is: "[in my situation] more (static|dynamic) properties are better."
The situations are usually more about people than they are about programming. And for some reason programmers don't like hearing this, but the second missing dimension in this discussion is that programmers are different. There's a huge difference between novice programmers and the mysterious "5% who are 20x more productive than the other 95%."
This is probably a threatening thing to say, because it can get tied up with things like money, social acceptance, and the perceived quality of our (as Woody Allen said) "second-favorite organ." But why should someone who can swing a hammer believe they have the same experience or ability as a master finish carpenter?
I remember coming out of school with a Master's degree in Computer Engineering (my undergraduate degree was in applied physics). I actually did know some things, and I could figure other things out. But my cubicle-mate, Brian, could build software. He had spent enough time thinking about it and doing it that he had perspective on the problem. That certainly didn't make me useless -- I created valuable things while I worked at that company. But at the time we were just beginning to use pre-ANSI C for hardware programming -- not much more than a glorified assembly language, with hardly any type checking at compile time and none at runtime -- and I could have used more. This might be part of the confusion about dynamic languages: it seems as if you might be thrown back to working without a net, as we were then.
On numerous occasions I've heard Smalltalk programmers say that if a programmer starts with Smalltalk, it fundamentally shapes how they approach all programming (they also talk about unlearning the damage done by other languages in order to learn Smalltalk). I think this is probably true; most of the people I know who started with Smalltalk have a much better grasp of the fundamental concepts, and an ability to see what is simply the cruft of a particular language, than people who start with languages that are closer to the metal.
I started with assembly, then Pascal & C, then C++, then Java, then Python (ignoring a number of flirtations with scattered other languages like Prolog, Lisp, Forth, and Perl, none of which really took). So my experience with languages started with the cruft and the important concepts mixed together. Not only could I not tell one from the other (so they all seemed equally important) but the higher-level concepts were not initially available.
In C++, I started by creating my own containers, because there was no STL for many years (the STL was added rather late in the standards process, and its initial goal was to be a set of generic algorithms for functional-style programming, not a set of containers, which is where it gets most of its use). All I wanted, initially, was a holder that would expand itself -- to a C programmer, stuck with fixed-size arrays, this seems rather revolutionary. And so it didn't seem strange that Java would have a separate library for containers. But Python doesn't even bother with fixed-size arrays. You just use a list, which is always there without importing any library, as common as air, as are dictionaries (Maps, to C++ and Java programmers). Now even sets are first-class objects.

In C++ and Java, collections are intermediate-level concepts, and so some programmers don't use them. It's quite difficult to know that some libraries are more important than others; in my initial experience with C, my tendency was just to write all my own code and not rely on library functions, so it was quite a while before I understood that malloc() and free() had special importance, and grasped the whole possibility of dynamic memory allocation (I really did come up the ladder from hardware). I now know intimately how lists and hashtables work, but I also wonder how different I might have been had I started with a language where dynamically-sized lists and dictionaries were just part of the fundamental toolset. My experience now, and what I see with those who started with Smalltalk, is that you take those concepts with you into languages like C++ and Java, and you are able to manipulate those languages more effectively as a result.
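As a trivial sketch of that difference: in Java, the self-expanding holder and the dictionary are library classes that must be imported and chosen, while the equivalent Python list and dict are simply part of the language.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Holders {
        public static void main(String[] args) {
            List<String> list = new ArrayList<String>();  // a holder that expands itself
            list.add("one");
            list.add("two");

            Map<String, Integer> dict = new HashMap<String, Integer>();  // a dictionary
            dict.put("one", 1);
            dict.put("two", 2);

            System.out.println(list + " " + dict);
        }
    }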
My point is that programmers are all on different positions on their own learning curve, and there's some intersection of that curve and the level of discipline that they and their team need. This is an issue that is based heavily in the human dynamic of the team. That is, it depends on who you have on your team and how they work together. Some teams benefit from the extra structure provided by a static language, others need the rapid abilities of a dynamic language.
However, much of the argument around static vs. dynamic languages seems to be about finding bugs. I will make two observations about this. First, both static and dynamic languages still produce bugs: no matter how many static type checks a language includes, bugs still happen. I think there is a belief in some camps that we simply don't have enough static type checks in Java, and that if we had more, the compiler could eventually guarantee provably-correct programs. And this leads to the second point: Java has a lot of dynamic checks, and it benefits from a lot of dynamic abilities, such as reflection. In fact, I think it could safely be argued that Java straddles the static and dynamic language worlds. For that matter, Python does a little bit of type checking during its compilation phase (and there is talk of adding more).
In the end, I think languages need to be opportunistic about when they can discover errors, and the best time to discover a given error is not always clear. Sometimes it's quickly obvious: compile-time array-bounds checking is generally infeasible, for example, so Java does it dynamically, at runtime (and C++ doesn't do it at all). Many of the arguments seem to be about when type correctness is established. The static approach is to force explicit declarations; the dynamic approach is to perform checks at runtime. In between is type inference, where you don't have to explicitly declare types but they're checked at compile time anyway. An even more interesting approach would be a combination of type inference and dynamic checking, which could produce the best of both worlds.
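As a small illustration of that straddling (a sketch, not an exhaustive catalog): in the Java below, the commented-out line would be rejected statically, at compile time, while the array-bounds check and the downcast check happen dynamically, at runtime.

    public class Checks {
        public static void main(String[] args) {
            // int n = "five";  // static: rejected by the compiler

            int[] a = new int[3];
            try {
                a[5] = 1;  // dynamic: bounds checked at runtime (C++ wouldn't check)
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("caught: " + e);
            }

            Object o = "a string";
            try {
                Integer i = (Integer) o;  // dynamic: downcast checked at runtime
            } catch (ClassCastException e) {
                System.out.println("caught: " + e);
            }
        }
    }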
Perhaps what I most value about the dynamic-language experience is that it has opened my mind a bit more, as learning any language has. Even if much of your day-to-day programming is in a language like C++ or Java, I believe that learning a dynamic language will not only add another very valuable tool to your set -- one which may allow you to solve "side problems" much more quickly -- but will also let you go back to your "main" language with a new perspective, one that helps you solve problems in a more effective and elegant fashion.
MindView Home Page
Alfie Kohn has an essay about how rewards are counterproductive. He's written numerous books on similar topics.
Malcolm Gladwell (author of The Tipping Point) has just published Blink, which also talks about things that we assume to be one way, and are actually completely different. I referred to a speech he gave which you can listen to online here.
For some reason I find this kind of thing fascinating. It's something about having our "natural" assumptions turned completely upside down. I guess it regularly reminds me that the world is not as predictable/boring as we come to expect.
This is also the motivation for questioning many of our assumptions about programming. I've had personal experiences where my assumptions have been wrong -- where I've discovered that something that I thought was helping was holding me back.
My friend Daniel commented:
I read something a while back about giving a reward BEFORE you start. That someone gave small bags of M&M's to groups before they started, and they performed much better, because they were happier. They don't have to be big things, just something small and pleasant sets the brain in a different mode. I now cannot find that article, even with my desktop google!
One of the difficult challenges when discussing the productivity of dynamic languages is to make any sort of proof about said productivity. I believe this is because it's one of those synergistic, emergent systems, where everything taken together produces a surprising or unexpected result. But this makes it hard to come up with any kind of proof, and as a result we have a bunch of people who are primarily just speaking about their personal experiences, trying to convince people who haven't had such experiences that it's a Good Thing ("Hey man, you gotta try some of this stuff!"). In our business, we are constantly being shilled into trying new products, technologies, etc., most of which have been a waste of time, some of which have been a colossal waste of time AND money. So we've learned to ignore most of these claims.
Oliver Steele offers his theory about why dynamic languages are more productive, folding in issues about testing, in Test versus Type.
MindView Home Page
Based on a conversation with Walter Bright, the creator of the D Language, a summary of my thinking on destructors in GCed languages:
1) If you have scoped objects, their destructors can be called automatically at the end of the scope, because it's deterministic. However, this could cause confusion with:
2) Heap-based objects should not have destructors called automatically, because garbage collection is not deterministic. If cleanup other than memory release is necessary, the programmer must be forced to decide when it happens, presumably with finally clauses, precisely because the GC is nondeterministic. This leads to:
3) The value of the destructor for GCed objects is that it automatically calls base-class destructors in the proper order (most-derived first). Java abdicates any responsibility for cleanup other than memory release, which forces programmers to write their own "cleanup" methods and to be diligent about proper order (see the sketch at the end of this post).
4) The biggest remaining problem is member-object cleanup. Member objects need to be cleaned up in reverse order of initialization. A destructor should automatically call member-object destructors, and if you don't expect the GC to call the destructor (which doesn't work anyway), you can assume that all member objects are still alive when the destructor is called. Therefore it is safe for the member-object destructors to be called. Without this, the solution is incomplete.
5) The one thing I could see adding to D is functionality to verify that destructors have been called if they exist. This would close the loop and solve the problem of destructors with GCed objects. The key is that the GC can't do it (as the Java designers learned with the multi-year finalize() debacle). But the GC can verify that destructible objects have had their destructors called by the time the GC is releasing the object.
Note that Python is able to get away with automatic destructor calls because it uses reference counting, so the moment the reference count goes to zero, the destructor is called. That is, it's deterministic in Python.
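To make points 3) and 4) concrete, here is roughly what the hand-written Java cleanup looks like (dispose() is only a common convention, not a language feature):

    // The programmer, not the GC, decides when cleanup happens, and must
    // be diligent about order: members in reverse order of initialization,
    // then the base class (most-derived first overall).
    class Member {
        private final String id;
        Member(String id) { this.id = id; }
        void dispose() { System.out.println("Member " + id + " cleaned up"); }
    }

    class Base {
        void dispose() { System.out.println("Base cleaned up"); }
    }

    class Derived extends Base {
        private final Member first = new Member("first");
        private final Member second = new Member("second");

        @Override void dispose() {
            second.dispose();  // members, in reverse order of initialization
            first.dispose();
            super.dispose();   // base class last
        }
    }

    public class Cleanup {
        public static void main(String[] args) {
            Derived d = new Derived();
            try {
                // ... use d ...
            } finally {
                d.dispose();  // explicit, deterministic cleanup
            }
        }
    }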
MindView Home Page
I was looking at Sun's Core Java Technologies Tech Tips for January 4, 2005, and I saw that Gilad Bracha is described as a "computational theologist." In Wikipedia, theology seems to be about religion. Does this mean that Gilad is someone who tends to get into religious debates about programming? Or is this a pay grade at Sun? Perhaps someone can explain the term to me.
[Later]
I think I see what Gilad is trying to say. One of the things I respect about the Jewish tradition is that they are taught not to accept the holy books at face value, but instead to question and argue about them. I think he means that he brings this approach to the design of programming languages. The risk in using such a term is that it could be misunderstood all over the place.
MindView Home Page