I've always liked Ada. She was the first to see the musical potential of computers. Here, look at this old blog post in which I mention her. As proof! Homebrew Adventures ;)
A tip of my hat to Ada then. And tips all round to that small elite of nerdy girls who hold their own in a man's world.
Wednesday, 24 March 2010
Tuesday, 9 March 2010
OS X Ubuntu USB Creator
I've spent some time attempting to make a Cocoa app that lets you burn an Ubuntu ISO to a USB memory stick on OS X.
I think I've got as far as I'm gonna get with it now, sadly.
How far I got...
- The UI is pretty concise
- A USB stick gets detected when plugged in
- It sends the right signal to the dd process and parses the progress output to drive the progress bar (there's a rough sketch of this below the list)
- It's really SLOW. I'm not familiar with how the dd command-line utility works -- people keep talking about 'eraseblocks' and suchlike and my eyes glaze over...
- It doesn't detect and inform the user when the write is complete
- It doesn't seem to create a bootable device
- I can't see how to automatically remount a device after I've unmounted it with diskutil
- I'm not amazingly confident that, once I've detected the device a volume resides on, I won't then end up destroying all the data on the wrong drive
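For anyone wondering what the dd dance looks like, here's a rough sketch of the approach - in Python rather than the actual Cocoa code, with the ISO path, device node and image size made up, and it would need to run as root. The idea: unmount the disk with diskutil, start dd against the raw device, prod it with SIGINFO every couple of seconds and scrape the byte count from its stderr.

import re
import signal
import subprocess
import threading
import time

# Placeholder values -- the real app would detect these, not hard-code them.
ISO_PATH = "/path/to/ubuntu.iso"
DISK = "/dev/disk2"        # whole-disk node reported by diskutil
RAW_DISK = "/dev/rdisk2"   # raw device is faster for dd
TOTAL_BYTES = 700 * 1024 * 1024

def write_image():
    # The volumes on the stick must be unmounted before dd can open the disk.
    subprocess.run(["diskutil", "unmountDisk", DISK], check=True)

    # On OS X (BSD dd), sending SIGINFO makes dd print a status line to stderr.
    proc = subprocess.Popen(
        ["dd", f"if={ISO_PATH}", f"of={RAW_DISK}", "bs=1m"],
        stderr=subprocess.PIPE, text=True)

    def poke():
        # Ask for a status line every couple of seconds while dd is running.
        while proc.poll() is None:
            proc.send_signal(signal.SIGINFO)
            time.sleep(2)

    threading.Thread(target=poke, daemon=True).start()

    # Status lines look like "123456789 bytes transferred in 12.34 secs (...)".
    for line in proc.stderr:
        match = re.search(r"(\d+) bytes transferred", line)
        if match:
            done = int(match.group(1))
            print(f"{100 * done / TOTAL_BYTES:5.1f}% written")

    proc.wait()
    print("dd finished with exit status", proc.returncode)

if __name__ == "__main__":
    write_image()

It doesn't solve the speed problem or the bootability problem, but it's the progress-reporting part of the puzzle in a dozen readable lines.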
The project is on Launchpad here: https://code.launchpad.net/~michaelforrest/ubuntuusbcreator-osx/trunk
It would be pretty awesome if you were able to help out.
Friday, 5 March 2010
Making the computer work for YOU
Yukihiro Matsumoto put it better than I ever could:
Often people, especially computer engineers, focus on the machines. They think, "By doing this, the machine will run faster. By doing this, the machine will run more effectively. By doing this, the machine will something something something." They are focusing on machines. But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves.
Ubuntu developers take note.
My experiences with the Ubuntu codebase so far have been that far too much emphasis is being placed on how the machine works, with little emphasis on modelling the real world or coming up with APIs and toolkits that work nicely from a human perspective.
We have driver code with giant switch statements to detect pin configurations of different chipsets on different motherboards, with new lines being hacked in whenever a new laptop comes out or some manufacturer decides to cut corners by wiring some connector to pin 3 instead of pin 2.
At the other end of the spectrum, we have inflexibly hacked UI code where any change to a view requires changes in two other files, often involving adding in mappings that could be implicitly determined or even removed entirely with a better approach to templating.
The phenotypical results of the underlying architectural problems are rife. My file-system suddenly became read-only yesterday. I hadn't done anything - I'd just walked across the room to talk to someone for two minutes. Sure, it's alpha, but seriously... how could this happen? Another time, during a UDS session, I was listening to something in my headphones and didn't realise until somebody told me that, embarrassingly, the sound was also coming from my laptop speakers.
Craig Larman says this: "We do not build software. The bricks are laid when we hit compile. We are designers." We design the architecture. We design the interfaces. We invent ways to model reality in code.
If you write code by laying bricks - by placing one switch condition after another - you are not programming; you are doing what the computer should be doing. If you copy-and-paste, you are doing what the computer should be doing.
Object-oriented code can be understood as a way of creating structures that allow the computer to reuse code across all the different places it needs to, allowing you to edit the code only ever in one place. If any change to an application's behaviour requires the same code to be edited in two places, then your code design is wrong.
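To make that concrete with a toy example (not real driver code - the chipset names and pin numbers are invented), here is roughly what "only ever edit the code in one place" looks like: each chipset describes its own quirks exactly once, and nothing else needs touching when a new one appears.

# A toy illustration, not real driver code: chipset names and pin numbers
# are invented. Each chipset describes its quirks exactly once; adding a new
# one never means editing a switch statement somewhere else.

class Chipset:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Chipset.registry[cls.name] = cls

    def headphone_pin(self):
        raise NotImplementedError

class HDA1234(Chipset):
    name = "hda1234"

    def headphone_pin(self):
        return 2

class HDA5678(Chipset):
    name = "hda5678"

    def headphone_pin(self):
        return 3  # the corner-cutting manufacturer's wiring lives here, and only here

def headphone_pin_for(chipset_name):
    # One lookup replaces every switch that used to know about pin numbers.
    return Chipset.registry[chipset_name]().headphone_pin()

print(headphone_pin_for("hda5678"))  # 3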
Dynamic languages vastly simplify the process of constructing concise code. In verbose languages like Java, C# or ActionScript 3, the programmer's intent is buried beneath layers of boilerplate code, braces, nestings and mappings (usually created automatically by the IDE these days). Python, Ruby or even Processing allow us to strip away all this noise and crystallise our intentions.
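A throwaway illustration of the point (the track data is made up): notice how little stands between the intent "count the tracks per artist" and the code.

from collections import Counter

tracks = [
    {"artist": "Aphex Twin", "title": "Xtal"},
    {"artist": "Boards of Canada", "title": "Roygbiv"},
    {"artist": "Aphex Twin", "title": "Flim"},
]

# The intent survives more or less intact in the code itself.
tracks_per_artist = Counter(track["artist"] for track in tracks)
print(tracks_per_artist.most_common())  # [('Aphex Twin', 2), ('Boards of Canada', 1)]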
Michael Forrest's Three Rules of Programming
- Always start by defining your interface. Never start with the implementation. Your interfaces and APIs should always model your problem domain, NEVER the way the computer works.
- Name things correctly. Never start typing until you have precisely the right method, variable or class name. You only have to type your code in once. You, and others, have to read it thousands of times.
- Annihilate hand-written repetitive code by writing scripts. If you cannot eliminate repetition in your application code or data files, always write a script to generate those files automatically, and never edit those files by hand (a minimal example follows this list).
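Here is a minimal sketch of the third rule in practice - the screen names and output file are invented, but the shape is the point: a dozen lines of script own the repetitive file, and nobody edits it by hand again.

# Generate a repetitive data file from one short list instead of hand-editing
# it. The asset names and output path are made up for illustration.
ASSETS = ["intro", "menu", "game_over"]

lines = ['<?xml version="1.0"?>', "<screens>"]
for name in ASSETS:
    lines.append(f'  <screen id="{name}" layout="layouts/{name}.xml" '
                 f'strings="strings/{name}.properties"/>')
lines.append("</screens>")

with open("screens.xml", "w") as output:
    output.write("\n".join(lines) + "\n")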
Michael Forrest's Three Rules of Workflow
- Optimise your workflow. Make it so you can hit a single keyboard shortcut after any code change that will show you the results of that change within 4 seconds (a bare-bones sketch follows this list).
- Don't run automated tests manually. If you're not automatically running your tests, you're going to end up abandoning your tests.
- Version-control everything with a distributed VCS. I don't care if it's git, hg or bzr - if you're not using local version control, you cannot write good code.
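A bare-bones sketch of the first two rules together - the watched directory and test command are assumptions, and a real setup would use an editor binding or a proper file-watching tool, but even this removes the manual step:

# Watch the source tree and re-run the test suite whenever a file changes,
# so feedback arrives without anybody having to remember to run anything.
import os
import subprocess
import time

WATCH_DIR = "src"  # assumption: adapt to your project layout
TEST_COMMAND = ["python", "-m", "unittest", "discover", "-s", "tests"]

def snapshot():
    # Map every watched file to its last-modified time.
    stamps = {}
    for root, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            stamps[path] = os.path.getmtime(path)
    return stamps

previous = snapshot()
while True:
    time.sleep(1)
    current = snapshot()
    if current != previous:
        previous = current
        subprocess.run(TEST_COMMAND)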
Some examples from my own processes
- When I write Java, I only ever generate method names by typing the call first, in context, and then letting Eclipse generate the function definition automatically with a keyboard shortcut. (P1)
- I will stop and walk around for half an hour trying to think of the best name for a class or method if one doesn't come to mind immediately, even if the implementation is trivial and my deadline is in an hour. If you don't do it straight away, you won't do it. No broken windows (to quote Larman again). (P2)
- I will always optimise the readability of my XML file before writing the class that consumes it. (P1)
- If the functionality of a method or class mutates over time, I will always use refactoring tools to rename it correctly. (P2)
- If there is a naming convention that can be used, I will build this in throughout my process. For a detailed example see this blog post, which I implore you to read: http://michaelforrest-code.blogspot.com/2009/03/naming-conventions-and-asset-management.html (a toy sketch of the general idea follows this list) (P3)
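To give a flavour of the general idea (this is a toy of my own devising, not the example from that post): derive asset locations from the class name by convention, rather than maintaining mappings by hand.

import re

def asset_paths(class_name):
    # "TitleScreen" -> "title_screen"; the directory layout is invented.
    slug = re.sub(r"(?<!^)(?=[A-Z])", "_", class_name).lower()
    return {
        "layout": f"layouts/{slug}.xml",
        "strings": f"strings/{slug}.properties",
        "music": f"audio/{slug}.ogg",
    }

print(asset_paths("TitleScreen")["layout"])  # layouts/title_screen.xml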
I await your feedback :)