Five NoodleBoard Facts
I thought it might be fun to publish a few facts about our OS X Dashboard widget, NoodleBoard.
1. There are just shy of a thousand registered boards, shared by about three times as many users.
2. This equates to about 1GB per week of board data.
3. Two percent of requests come to us via Apple Computer Inc. That actually makes them our number one user, just ahead of Southwestern Bell and a bunch of US ISPs.
4. The average board size is about 3k when encrypted.
5. Our busiest hour of the day is 2200 GMT.
I'm currently working with some volunteer beta testers on a new version of the widget, which should be out in the next week or so.
Damage Limitation Patterns
It's a fact that there are certain systems you can implement to improve the chances of a software project succeeding. Joel Spolsky lists twelve of them in his essay 'The Joel Test: 12 Steps to Better Code'.
What I want to talk about here is how, occasionally, one or more of your systems is going to break under the strain. In an ideal world this shouldn't happen, and it possibly wouldn't if you ran your own company (like Joel does), but in the harsh realities of the market people do unusual and sometimes stupid things, such as fixing a bug in the wrong branch (or, worse, on a live server) without an associated bug report filed anywhere.
Whilst everyone knows that this is a bad idea, the pressure on developers from other quarters can be immense, and not everyone's main motivation is to have properly working, well-designed software. Managers, for example, often take on new hires simply because they have the budget in place today and might not in the future. For them, having a big team is more important than having an effective one. That's not necessarily a bad thing (although it usually is), but it is an example of where working, well-designed software is not the primary consideration.
So, something in your perfect software factory breaks, and it almost always happens at a time of great stress, perhaps in the week before a major release, when no one really has the time to fix it. Around this time you'll likely hear people saying things like, 'What's the point in using CVS? It can't cope with such-and-such,' and 'I've never seen one of these systems work properly in the real world.'
Don't lose faith.
I've seen this happen so many times that I'm coming round to the view that the point of these systems actually is to fail. Thinking of them as barricades holding back the hordes of chaos is probably a good position to adopt. A few barriers might fall, and you can put them back up after crazy week is over (and the hordes are off planning their next assault - I mean 'project'), but imagine the state you'd be in if your entire project ran like that from beginning to end.
Striving for a sane build environment is a bit like painting the Forth Bridge: it's only ever halfway done, and the other half looks like crap, but without the effort the bridge would have fallen down long ago.