Feb. 22nd, 2017 @ 08:24 pm More musing on meta-history in source control
I was reflecting further on my previous comments on meta-history in source control.

One use case I imagined was that you can rebase freely, and people who've pulled will have everything just work, assuming they always pull with rebase. But I may have been too pessimistic: a normal pull --rebase may usually just cope with the sort of rebasing people are likely to have done upstream anyway.

The other question is: are you losing old history by rebasing older commits? Well, I don't suggest doing it for very old commits, but I suppose you're not losing any history for commits that were in releases.

Although that itself raises a reason to have a connection between the new branch and the old: you shouldn't be rebasing history prior to a release much (usually not at all; maybe to squash a commit to make git bisect work?). But if you do, you don't want two parallel branches with the same commit: you want to be able to see where the release was on the "good" history (assuming there's a commit which is file-identical to the original release commit), and fall back to the "original" history only if there's some problem.

And as I've said before, another thing I like is the idea that if you're rebasing, you don't have a command that says "do this whole magic thing in one step". You have a command that says "construct a new branch one commit at a time from this existing branch, stopping when there's a problem", and there is no state needed to continue after resolving a problem: you just re-run the command on the partially-constructed new branch. And then you can choose to throw away the old branch to tidy up, but that's not an inherent part of the command.

You can also comment at http://jack.dreamwidth.org/1018447.html using OpenID.
About this Entry
Feb. 16th, 2017 @ 11:35 pm Getting 90% of things done
I recently realised something which many people had told me before, but I hadn't had the prerequisites in place to grok.

If I have a list of tasks to do today, and a rough breakdown of how long I expect them to take, and one is overrunning, it may make sense to take a break from *that* one and do all the others. And then start afresh on the longer one tomorrow. (At least, except when that one is SO urgent you just need to work on it alone until it's done.)

Partly because, even if your main work is overdue, there's always small other stuff (answering emails, dealing with admin, dealing with requests from other people) that it's good if it still gets done. And better that it gets SOME attention, even if that's just "I got your email but don't have time to reply in detail" rather than none.

And partly because, if something runs late, it often then runs MORE late, so (a) it will probably still be late even if you do drop everything else and (b) if you don't it will usually eat up ALL of the time.

And partly because, 10% of the tasks often take 90% of the time, and sometimes that's the most important task and sometimes it isn't, so if you advance on *all* tasks, you may find you've done all the most important ones and may never do the overrunning one at all.

I think that never really worked for me before, because tasks were ALWAYS overrunning -- not because they took too long, but because I was scared of starting them. And the only way of starting them was forcing myself to; if I did other stuff, I would never start at all. (Which is ok if that task can be dropped, but not if it's the main thing I should be doing.) So my main task always overran, and the other stuff never got done till it got urgent. Now I've got better at not doing that :)

OTOH, the system also breaks down when you have too much stuff coming in to do all of it, and you don't have time for even the most basic of reaction to stuff people are thrusting onto your plate. At that point, you need to adopt a "don't have time for this" and "see if this goes away by itself before responding" attitude (or get an assistant).

You can also comment at http://jack.dreamwidth.org/1017878.html using OpenID.
Feb. 16th, 2017 @ 10:01 pm Duck Typing
I like the principle of duck typing.

Roast it if it looks sufficiently duck-like. Don't worry about whether it's officially a duck, just if it has the relevant features for roasting.

However, I don't understand the attachment to getting 3/4 of the way through the basting, stuffing and roasting project before suddenly discovering that you're trying to crisp a small piece of vaguely duck-shaped ornamental stonemasonry.

I agree with (often) only testing for the *relevant* features of duck-ness. But it seems like the best time to test for those relevant features is "as soon as possible", not "shut your eyes, and charge ahead until you fail". Is there a good reason for "fail fast, except for syntax errors, those we should wait to crash until we're actually trying to execute them"?
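In a statically checked language, "test for the relevant features as soon as possible" is roughly what a trait bound gives you. A minimal rust sketch (the names are mine, purely illustrative):

```rust
// A trait names only the features relevant to roasting, duck-typing style:
// anything implementing Roastable can be roasted, whatever it "officially" is.
trait Roastable {
    fn fat_content(&self) -> u32;
}

struct Duck;
impl Roastable for Duck {
    fn fat_content(&self) -> u32 { 30 }
}

// Deliberately NOT Roastable: the vaguely duck-shaped stonemasonry.
struct StoneOrnament;

// The bound is checked at the call site, at compile time.
fn roast<T: Roastable>(item: &T) -> u32 {
    item.fat_content() / 2 // crispiness, say
}

fn main() {
    assert_eq!(roast(&Duck), 15);
    let _masonry = StoneOrnament;
    // roast(&_masonry); // fails fast: a compile error, not a crash 3/4 through
}
```

The check is still only for the *relevant* features -- roast() never asks whether the argument is officially a duck -- but it happens before the project starts, not when it fails.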

I've been working on my non-vegetarian metaphors, how did I do? :)

You can also comment at http://jack.dreamwidth.org/1017576.html using OpenID.
Feb. 16th, 2017 @ 09:46 pm Gym
Went to the gym attached to the hotel and spa in Bar Hill.

It's small, but fairly nice and has a swimming pool with jacuzzi and sauna because spa.

It was busy at 5:30, but fairly quiet by the time I left.

Of course, it's a bit ridiculous starting to go to the gym instead of running outside just when the evenings are getting lighter. But I felt that if I was driving to work, it would be easier to arrange than jogging (and useful because I'm no longer cycling about).

And in fact, I felt like the treadmill and weights both worked my muscles harder than I'd been managing without equipment. Which is good, because hopefully I will start improving again. But bad, because I'd hoped I'd have figured out how to improve without them by now.

They rent towels for 50p. I'm trying to decide if it's more efficient to bring one or rent one. I like the no-hassle of picking one up, and not having several towels drying at home and trying to decide if I can reuse them or if they need to be in the washing. And not to add several towels to the washing load. But if I'm driving, the overhead of bringing a towel as well as gym things is a lot lower. And even if I needed to buy a couple of extra bath towels they would probably pay for themselves. Any thoughts?

You can also comment at http://jack.dreamwidth.org/1017741.html using OpenID.
Feb. 15th, 2017 @ 10:59 pm Further parent scope iteration
Thanks to everyone who commented on the previous post, or posted earlier articles on a similar idea. I stole some of the terminology from one of Gerald-Duck's posts Simon pointed me to. And have tried to iterate my ideas one step closer to something specific.

Further examples of use cases

There are several related cases here, many adapted from Simon's description of PuTTY code.

One is: in several different parts of the program there is a class which "owns" an instance of a socket class. Many of the functions in the socket also need to refer to the owning class. There are two main ways to do that. One way is that every call to a socket function passes a pointer to the parent, but that clutters up the interface. Or the socket stores a pointer to the parent, initialised on construction, but then there is no really appropriate smart pointer, because both classes have pointers to each other.

A socket must have an owner. And classes which deal with sockets will usually have exactly one that they own, but will also often have none (and later open one), or more than one.

And you "know" the pointer to the parent will never be invalid as long as the socket is owned by the parent, because you never intend to pass that pointer out of that class. But there is no decent language construct for "this pointer is a member of this class, and I will never copy it, honest" which would allow the child to have a provably-safe pointer to the parent. This is moot in C if you don't have smart pointers anyway, but it would still be useful to exactly identify the use case so a common code construction could be used, and programmers could see the intended functionality at a glance. It would be useful to resolve in C++. And there are further problems in rust, where using non-provably-safe pointers is deliberately discouraged, and there's a greater expectation that a class can be moved in memory (and so shouldn't contain pointers to parts of itself).

The same problem can be described two different ways. One is: "a common pattern is allocating an owned class as a member of another class, where the owned class has an equal or shorter lifetime than the owner, and a pointer back to the owner which is known to always be valid, with no pointer loops" -- a special sort of two-way pointer, where one is an owning pointer and the other is a provably-valid non-owning pointer. Another is: "classes often want to refer to the class that owns them, or the context they were called from, and there is no consistent/standard way of doing that."


Using C++ terminology, in addition to deriving from a class, a class can be declared "within" another class, often an abstract base class aka interface aka trait of the actual parent(s).

class Plug {
    virtual void please_call_me_from_socket(int arg1) = 0;
};

class Socket : within Plug {
    // Please instantiate me only from classes inheriting from Plug
    void do_something();
    int foo;
};

The non-static member functions of Socket, in addition to a hidden pointer parameter identifying the instance of Socket which is accessed by "this->", have a second hidden parameter identifying the instance of Plug from which it was called, accessed by "parent->please_call_me_from_socket(foo)" (or "parent<Plug>->please_call_me_from_socket(foo)", or something to disambiguate if there are multiple within declarations; syntax pending).

Where does that pointer come from? If it's called from a member function of a class which is itself within Plug, then it gets that value. That's not so useful for Plug, but is useful for classes which you want to be accessible almost everywhere in your program, such as a logging class.

In that case, you may want a different syntax, say a "within" block which says all classes in the block are within an "app" class, and then naturally all pass around a pointer to the top-level app class without any visual clutter. And it only matters when you want to use it, and when you admit the logger can't "just be global".

For Socket, we require that member functions of Socket are only called from member functions of Plug (which is what we expected in the first place, but hadn't previously had a way of guaranteeing). And then the "parent" pointer comes from the "this" pointer of the calling function.

There is probably also some niche syntax for specifying the parent pointer explicitly, if the calling code has a pointer to a class derived from Plug, but isn't a member function, or wants to use a different pointer to this. The section on pointers may cover this.
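For contrast, here's a sketch of what the status quo looks like in today's rust, where the back-pointer has to be a Weak reference to break the ownership cycle (all names hypothetical, loosely following the Socket/Plug example):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// The owner ("Plug") holds the Socket; the Socket holds a Weak
// back-reference so the two don't keep each other alive forever.
struct Plug {
    log: RefCell<Vec<String>>,
    socket: RefCell<Option<Rc<Socket>>>,
}

struct Socket {
    parent: Weak<Plug>,
}

impl Socket {
    fn do_something(&self) {
        // upgrade() checks at run time that the parent is still alive.
        if let Some(plug) = self.parent.upgrade() {
            plug.log.borrow_mut().push("socket did something".into());
        }
    }
}

fn main() {
    let plug = Rc::new(Plug {
        log: RefCell::new(Vec::new()),
        socket: RefCell::new(None),
    });
    let socket = Rc::new(Socket { parent: Rc::downgrade(&plug) });
    *plug.socket.borrow_mut() = Some(socket.clone());
    socket.do_something();
    assert_eq!(plug.log.borrow().len(), 1);
}
```

The upgrade() dance on every call is exactly the ceremony the "within" proposal would remove: you "know" the parent outlives the socket, but have no way to prove it to the compiler.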

Pointers, Callbacks, Alternatives, and Next Steps (section collapsed)

You can also comment at http://jack.dreamwidth.org/1017241.html using OpenID.
Feb. 2nd, 2017 @ 04:10 pm Have more than two scopes
The idea in my last post got a bit buried, so I thought it through some more. I'm still mulling it over, and I'm not sure if it makes sense.

Imagine you have a computer game. What are your classes like? Often you have a top-level "program" class. And maybe a "current game" class and "current level" class. And a whole bunch of stuff for each object in the level (whether those are separate class types, or just structs with an enum for type, or whatever).

You often have some data or functionality which is specific to your program, but should be accessible in many parts of the program. Or specific to the current game, or current level. Eg. many different events may add a score. Everything might write to a log file. Commonly functions on objects want to look at other "nearby" objects.

Currently, you basically have a choice of two scopes of values which are available to a function. Global variables, and variables in the current class. What I'm suggesting is that two is the wrong number of scopes to have. If you have two, one or two more is likely to be useful.

Some data naturally lives at a level that is neither global nor in the current class, but in the "program" class. And it is visible to all classes "in" the program class.

What does "in" mean? Possibly "declared with a special syntax referencing the program class". But it might be better to treat it like a namespace or module, that says "everything in here can only be used by this class (or compatible classes like mocks)", and all member functions get *two* hidden parameters, one for the program, and one for the "this" pointer. Or four, if you have "program", "current game", and "level".

It's easy to imagine your program and say, "but there'll only be one current game at once". But once you SAY that, you can imagine why there wouldn't be. And then any values associated with that need to not be just shoved in the program class or in global scope, but managed properly.

And you CAN provide them to the "children" classes by giving the child class a pointer to the correct parent. You definitely should decide what should be visible everywhere and what only needs to be visible in some places. But I'm suggesting it would be *clearer* not to do that ad hoc, but to explicitly choose what should be shared.

ETA: Simon points me to a post of gerald-duck's I read ages ago but seem to have partially re-invented: http://gerald-duck.livejournal.com/710339.html

You can also comment at http://jack.dreamwidth.org/1016554.html using OpenID.
Jan. 30th, 2017 @ 05:35 pm Rust: contributing and branching
I was pleasantly surprised how easy it was to contribute to rust. It seems like there's a combination of things.

I don't exactly know who the driving forces are in the project. But I think several people are employed by Mozilla to work on rust development, which means there is some full-time work, not only scrabbled moments.

There seems to be a genuine commitment to providing an easy on-ramp. Everything in github seems fairly up-to-date, which makes it a lot easier to get an idea of what's what. Bugs/issues are sorted into many categories, including ones that are easy, suitable for newcomers, which is very welcoming.

There is a bot which takes pull requests and assigns them to a reviewer, so most don't just languish with no-one accepting or rejecting them. The reviewer is chosen randomly from a pool appropriate to the component, and reassigns it if someone else would be better.

Even just spending a couple of days pushing the equivalent of a "hello world" patch through (what is the term for "the effort to make a one-line change with no significant code change"?), it felt like I was part of a project, with ongoing activity about my contribution, not someone screaming well-meaning suggestions into a void.

This isn't rust-specific, but it was the first time I used github for much more than browsing, and it was interesting to see how all the bits, code history, pull requests, etc interacted in practice.

Rust itself had an interesting model. A reviewer posts an approval on the pull request. *Then* a bot runs tests on all approved requests in descending order of priority, and merges them if they pass.

That means the default assumption is that if a commit to master fails a test for some platform, nothing needs to be rolled back -- further pull requests continue to be tested and merged (assuming they don't gain any conflicts). And "master" is always guaranteed to pass tests.

Currently patches are either tested individually, or ones with inconsequential risks (documentation changes and the like) are tested in a batch. It seems to work well. It relies on the idea that most patches are independent, that they can be merged in any order, which usually seems to be true.

If you took the idea further, you can imagine ways of making it less of a bottleneck. Rather than just testing all patches which happen to be submitted at the same time, you can easily imagine a tier system. Maybe priority. Or maybe, have minor tests (eg. just that everything compiles and some basic quick tests of functionality which is known to have changed) to gate things through a first stage, and find problems quickly, and then a second stage which catches obscure errors but is ok to test multiple patches at once, because it doesn't usually fail.

In fact, I can't imagine working *without* such a system. At work we have a nightly build, but it would have been easy to add a tag for "most recent working version", and that never quite occurred to me, even as I suggested other process improvements.

You can also comment at http://jack.dreamwidth.org/1016064.html using OpenID.
Jan. 30th, 2017 @ 12:20 pm Productivity changes

Last year, I decided to try having month-by-month goals instead of trying to do new years resolutions.

Nov was NaNoWriMo, which was what gave me the idea. That was a big commitment, which I think averaged out to about 2h per day, with some "thinking time" on top. Dec was to recover. Feb will be "start new job".

Jan was "learn some rust, if possible contribute to rust compiler". That was a bit speculative, and I wasn't sure how big a goal was reasonable. But it turned out fairly well. I think I got a reasonable handle on the basic syntax, and on the borrow-checker concepts which most people find a hurdle when getting to know the language. I built a couple of "hello world" programs to be sure I understood the building and packaging system.

And I built the compiler from source, and submitted a pull request to fix one of the "easy" documentation issues from the issue tracker, and learned how the project was organised. Which is now accepted!

So I think on balance that was about the right amount and specificity of goal. And I count it as mostly a success.

I reckoned the time spent stacked up something like this: 1 week of work, minus overhead faff, was about the equivalent of an intense weekend hackathon, or a not-very-intense project over a month. NaNoWriMo was about twice that (more on some days, likely). And some projects lend themselves to a brief burst of activity and others to longer steady progress.

I'm simultaneously pleased that I *can* expect to focus energy on some projects and actually get somewhere with them. But also depressed that there's only so many months and each lets me achieve comparatively little.

I have lots of ideas of what I might do, but not sure what is most worthwhile to spend that effort on. Some coding projects. Some self-improvement projects. Some social things.

Daily todos

I shifted my daily todos a bit incorporating some ideas from bullet journals (as linked by ghoti).

I started keeping my daily todo-list IN my diary, and when I've done an item, changing the "-" bullet point to an "x" and moving it down to the bottom of the list. So what I'm GOING to do naturally becomes a diary.

I also started, instead of having subheadings, having a few different types. "=" bullet point for largish task. "-" for anything small but needs to be today. "s" for social-type task. (todo and social get postponed in different circumstances and consume energy in different ways.)

It feels easier to plan what I WANT to do, without feeling like I've failed if I don't do all of it.

I also started keeping my actual diary in multiple bullet points with a different bullet, instead of strung together. I'll see how that goes.

I feel like I'm slowly re-evolving a system lots of people already recommended to me. But I couldn't just *do* it, it depends on having confidence that putting things in a list actually works, and I've only slowly acquired that.

Likewise, maybe I don't need to record so much. But doing so was a step in the process of not worrying about it so much. And what's useful I keep, and what I don't need I've got better at just deleting, and not thinking "but I might need that one day".

Similarly, I keep a parallel diary I call my therapy diary, for rants where I know they won't seem as persuasive in future but I have to make them. "WHY WHY WHY can't I just do X without screwing it up?" "Why does Y keep going wrong?" "This happened and now I feel really bad about it." The idea was, I'd think through the things later and come to terms with them. But actually just writing them down helped a lot. Now I rant in it much less often than I did to start with.

You can also comment at http://jack.dreamwidth.org/1015955.html using OpenID.
Jan. 29th, 2017 @ 10:01 pm Rust: Different uses for pointers, and a weird suggestion for new syntax
Tags: ,
This isn't solely about rust, but it made me think about something I wasn't really aware of: there are several common uses for pointers. The four uses themselves are nothing special, but I'm interested in thoughts on the speculation about #3 and #4.

1. Heap allocation.

If you allocate a value on the heap, not the stack, you need to refer to it by a pointer. And if you're using a language other than C, it's automatically de-allocated after the stack-allocated pointer goes out of scope (either immediately, using a smart pointer in C++, or eventually, in a garbage-collected language).

If that's *all* you want to do, you can hide the issue from the programmer completely if you want to, as with languages that expect heap-allocation by default, and you're just supposed to know which copies produce independent duplicates of the same data and which copies refer to the same common data.

In rust, this is a Box (?)
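It is indeed: Box is rust's owning heap pointer, and the allocation is freed automatically when the owner goes out of scope. A minimal sketch:

```rust
fn main() {
    // Box allocates on the heap; the Box itself lives on the stack and
    // owns the allocation.
    let b: Box<i32> = Box::new(42);
    assert_eq!(*b, 42);

    // Moving the Box transfers ownership: no deep copy of the heap data,
    // and the allocation is freed exactly once, when `c` goes out of scope.
    let c = b;
    assert_eq!(*c, 42);
} // heap value deallocated here, automatically
```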

2. Pass-by-reference.

If you pass a value to a function and either (a) want the function to edit that value or (b) the value is too large to copy the whole thing efficiently, you want to pass the value by reference. That could be done either with a keyword which specifies that's what happens under the hood, or explicitly by taking a pointer and passing that pointer.

In rust, you pass a ref (equivalent to a C++ reference or pointer), but there are various compile time checks to make sure the memory accesses are safe.
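A small sketch of both flavours in rust -- a shared reference to avoid copying, and a mutable reference so the callee can edit the caller's value (function names are mine, purely illustrative):

```rust
// `&mut` lets the callee edit the caller's value; the compiler checks
// the reference can't outlive what it points at.
fn double_in_place(x: &mut i32) {
    *x *= 2;
}

// A shared `&` reference avoids copying without allowing mutation.
fn total(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let mut n = 21;
    double_in_place(&mut n);
    assert_eq!(n, 42);

    let big = vec![1, 2, 3];
    assert_eq!(total(&big), 6); // only a pointer is passed, not the data
}
```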

3. I need access to this struct from various different parts of my program.

Eg. a logging class. Eg. a network interface class. Each class which may need to access functionality of those classes needs a way to refer to them. There's a few ways of doing this, which are good enough, although not completely satisfactory.

You can make all those be static. But then there's no easy way to replace them in testing, and there are problems with lifetimes around the beginning and end of your program. You have to be careful to initialise them in the right order, or just assume you don't use them around the time they may be invalid (but that may throw up lots of errors from lint or the rust compiler).

You can pass them in as arguments to every function. But that's clunky, and involves a lot of repetition[1]. However, see the weird suggestion at the end.

Or you can just make sure each class has a pointer to the necessary classes (or maybe to a top-level class which itself has pointers or members with the relevant classes), initialised at class construction. However, this has *some* of the problems of the above two possibilities: it's less easy to replace the functions for testing, and it's somewhat redundant. This one is what's a little weird in rust: I think you have to use objects which are declared const, but actually have run-time-checked non-const objects inside ("interior mutability"). Again, see the weird suggestion at the end.
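In rust terms, that third option tends to come out as an Rc<RefCell<...>> handle stored at construction -- exactly the "declared const, but run-time-checked non-const inside" shape just described. A sketch with a hypothetical logger shared by two subsystems:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical logger; each subsystem stores a handle at construction.
struct Logger { lines: Vec<String> }

struct Network { log: Rc<RefCell<Logger>> }
struct Ui { log: Rc<RefCell<Logger>> }

impl Network {
    fn connect(&self) { self.log.borrow_mut().lines.push("connect".into()); }
}
impl Ui {
    fn draw(&self) { self.log.borrow_mut().lines.push("draw".into()); }
}

fn main() {
    let log = Rc::new(RefCell::new(Logger { lines: Vec::new() }));
    // Both subsystems share the one logger; a test could hand them a
    // different Rc instead.
    let net = Network { log: log.clone() };
    let ui = Ui { log: log.clone() };
    net.connect();
    ui.draw();
    assert_eq!(log.borrow().lines, vec!["connect", "draw"]);
}
```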

4. I have some class which contains many sibling objects which need to know about each other.

This *might* be a data structure, if you're implementing a vector, or a doubly-linked list, or whatever for a standard library. Probably not, those are usually implemented with old-school unchecked pointers like in C, and you just make sure you do it right. But it would be *nice* if you could have efficiency *and* checking.

More commonly, there's something like "a parent class with several different subsystems which each need to call functions in different ones of them". Or a computer game where the parent object is a map of tiles, where each tile contains a different thing ("wall" or "enemy" etc), and the tile types want to do different things depending on what's adjacent to them.

In this case, my philosophy has slowly become: as much as possible, have each class return... something, and have the parent class do any glue logic. That makes the coupling much less tight, ie. it's easier to change one type without it having baked-in knowledge of all parent types and sibling types. But it doesn't work if the connections are too complicated. And even if it works, it gives up some of the flexibility of having different functions for different types of child, because a lot of functionality has to be in the parent (where "do a different thing" may be "switch statement", not "child pointer derived from base/interface type and function dynamically dispatched"). Again, notice, this is functionally very very similar; the question is about what's easy to read and write without making mistakes.
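A sketch of that philosophy in rust, using the tile-map example (all names illustrative): each tile only reports a fact about itself, and the parent does the glue logic, rather than tiles holding pointers to their siblings.

```rust
enum Tile { Wall, Enemy, Empty }

impl Tile {
    // The child only reports a fact about itself...
    fn blocks_movement(&self) -> bool {
        matches!(*self, Tile::Wall | Tile::Enemy)
    }
}

// ...and the parent combines facts about adjacent tiles. The "glue"
// (what adjacency means) lives here, not baked into each tile type.
fn can_step(map: &[Tile], from: usize) -> bool {
    map.get(from + 1).map_or(false, |t| !t.blocks_movement())
}

fn main() {
    let map = [Tile::Empty, Tile::Empty, Tile::Wall];
    assert!(can_step(&map, 0));  // next tile is Empty
    assert!(!can_step(&map, 1)); // next tile is Wall
}
```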

Again, see weird suggestion below.

A weird suggestion for new syntax

This is the bit that popped into my head, I don't know if it makes sense.

We have a system for encapsulating children from parents. The child exposes an interface, and the parent uses the interface and doesn't worry about the implementation. But here we have children who *do* need to know about parents. One option is to throw away the encapsulation entirely, and put things in a global scope.

But how about something inbetween?

Say there is a special way of declaring type A to be a parent (or more likely, an interface/base type which exposes only the functions needed, and an actual class which derives from/implements that), and B1, B2, B3 etc to be children types, types which are declared and instantiated from A.

Suppose our interface, A, exposes a logging member function or class, and two members of types B1 and B2 (because those are expected to be needed by most of the children).

And then, you can only declare or instantiate those children B1, B2, B3 etc in A or a member function of A (that is, where there is a this or self value of type compatible with A). And whenever you call a member function of one of those children, just like that child is passed along in a secret function parameter specified with a "this" or "self" value, there is a similar construct (syntax pending) to refer to members of A.

So, just as "b.foo(x,y)" is syntactic sugar for "foo(b,x,y)" where b becomes "this" or "self", make "a.b.foo(x,y)" syntactic sugar for "foo(a,b,x,y)" where b becomes "self" and a becomes "parent" or "a::" or whatever.

Basically, ideally you'd ALWAYS have encapsulation. But sometimes, you actually do have a function you just want to be able to call from... most of your program. Without hassle. You know what you mean. But you can't easily specify it. So it sometimes ends up global. But it shouldn't be *completely* global. It should be accessible in any function called from a top-level "app" class or something, or any function of a member of that, or a member from that, if they opt in.

[1] Repetition

Everyone knows why repetition is bad, right? At best, you put the unavoidably-repeated bits in a clear format so you can see at a glance that they're what you expect and have no hidden subtleties. But over and above the usual arguments against it, even if people are happy to copy-and-paste code, writing out extra things in function signatures drives people to find any other solution, even crappy ones.

You can also comment at http://jack.dreamwidth.org/1015684.html using OpenID.
Jan. 28th, 2017 @ 02:42 pm Rust: Borrow Checker
Tags: ,
Const values

Last time I talked about lifetimes. Now let me talk about the other part of references, ownership and borrow checking.

If you're dealing with const values, this is similar to other languages. By default, one place "owns" a value. Either declared on the stack, or on the heap (in a Box). Other places can be passed a const reference to that value. As described with lifetimes, rust checks at compile time that all of those references are finished with before the original goes out of scope. When the original goes out of scope, it's deallocated (from stack or heap).

Alternatively, it can be reference counted. In rust, you can use Rc<Type> instead of Box<Type> and it's similar, but instead of having a native reference to the value, you take a copy of the Rc, and the value is only freed from the heap when the last Rc pointing to it disappears.

One reason this is important is thread-safety. Rc isn't thread safe, and rust checks you don't transfer it to another thread for that reason. Arc changes reference count atomically so *is* thread safe, and can be sent to another thread. (It's a copy of the Arc that's sent, but one that refers to the same data.)
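Concretely, in rust:

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: shared ownership within one thread. Cloning bumps a reference
    // count; it does not copy the underlying data.
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(*b, vec![1, 2, 3]);

    // Arc: the same idea with an atomic count, so clones can cross threads.
    // (Trying to send an Rc here would be a compile error, not a runtime one.)
    let c = Arc::new(vec![4, 5, 6]);
    let d = Arc::clone(&c);
    let handle = thread::spawn(move || d.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 15);
}
```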

Const references can't usually be sent between threads unless the original has a lifetime of the whole program (static), because there's no universal way to be sure the thread is done with it, so it's always illegal for the original owner to go out of scope (?). But threads with finite lifetimes are hopefully coming in future (?)

Non-const values

A big deal in rust is making const (immutable) the default, and declaring non-const things (mut). I think that's a good way of thinking. But here it may get confusing.

You can have multiple references to an immutable value. But in order to be thread safe, you can only have one *mutable* reference. Including the original -- it's an error to access the original during the scope of a mutable reference. That's why it's called a "borrow" -- if you make a mutable reference to a value, you can only access the original again once the reference goes out of scope.
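Concretely, in rust:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Any number of shared (immutable) references may coexist...
    let r1 = &v;
    let r2 = &v;
    assert_eq!(r1.len() + r2.len(), 6);

    // ...but a mutable reference is exclusive: while `m` is live, even
    // reading the original `v` would be a compile error.
    let m = &mut v;
    m.push(4);

    // Once the borrow ends, the owner is usable again.
    assert_eq!(v.len(), 4);
}
```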

But a point that's less well agreed is how useful this is when you don't pass anything between threads.

One argument is that you might be able to have a pointer *to* a value that you then mutate, but if it's something like a vector, you can't have a pointer/reference to a value in it because that might have been invalidated. And even if you have an iterator which could in theory be safe (eg. the iterator contains an index, not just a pointer), you still need to check for the iterator being invalid when it's used, which reduces various optimisations.

Another argument, that I found more interesting, is that even if the value isn't invalidated in a memory-safety sense, if you change the value in two disparate parts of code (say, you loop through all X that are Y calling function Z, and function Z in turn calls function W which does something to some X, including the ones you're iterating through), it's easy for the logic you write to be incorrect, if you can't tell at a glance which values might be changed half way through your logic and which won't be.

I found that persuasive as a general principle. Though I'm not sure how practical it is to work with those constraints in practice, if they're generally helpful once you know how to work with them, or if they're an unnecessary impediment. Either way, I feel better for having thought about those issues.

Workaround, interior mutability

"Interior mutability" is feature of rust types (Cell and RefCell), which is a bit like "mutable" keyword in C++: it allows you to have a class instance which the compiler treats as constant, (eg. allowing optimisations like caching return values), but does something "under the hood" (eg. the class caches expensively calculated results, or logs requests made to it, or keeps a reference count).

There's a couple of differences. One is, as I understand it, you don't just write heedlessly to the mutable value; rather, rust checks at run time that you only take one mutable reference to it at once. So if you screw up, it immediately panics, rather than working most of the time but with subtle bugs lurking.

But it's also the case that if you do want a shared class accessed by many parts of your program (a logging class, say, is that a reasonable example?), rust encourages you to use interior mutability to replicate the default situation in C or C++: a class which multiple different parts of your program have a pointer to, through which they can call (non-const) functions.
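A sketch of that shape, with a hypothetical logging class: callers hold an ordinary shared reference, and RefCell moves the borrow checking to run time.

```rust
use std::cell::RefCell;

// A logger with interior mutability: callers can append through a
// non-mut method, even though they only hold a shared reference.
struct Logger { lines: RefCell<Vec<String>> }

impl Logger {
    fn log(&self, msg: &str) { // note: &self, not &mut self
        self.lines.borrow_mut().push(msg.to_string());
    }
}

fn main() {
    let logger = Logger { lines: RefCell::new(Vec::new()) };
    logger.log("hello");
    logger.log("world");
    assert_eq!(logger.lines.borrow().len(), 2);

    // The borrow rules are enforced at run time instead: holding this
    // borrow() while calling borrow_mut() would panic immediately,
    // rather than silently corrupting anything.
    let peek = logger.lines.borrow();
    assert_eq!(peek[0], "hello");
}
```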

I have more thoughts on these different ways of using pointers maybe coming up.

You can also comment at http://jack.dreamwidth.org/1015438.html using OpenID.