Friday, 5 March 2010

Making the computer work for YOU

Yukihiro Matsumoto put it better than I ever could: 
Often people, especially computer engineers, focus on the machines. They think, "By doing this, the machine will run faster. By doing this, the machine will run more effectively. By doing this, the machine will something something something." They are focusing on machines. But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves.
Ubuntu developers take note.


My experience with the Ubuntu codebase so far has been that far too much emphasis is placed on how the machine works, and far too little on modelling the real world or coming up with APIs and toolkits that work nicely from a human perspective.


We have driver code with giant switch statements to detect pin configurations of different chipsets on different motherboards, with new lines being hacked in whenever a new laptop comes out or some manufacturer decides to cut corners by wiring some connector to pin 3 instead of pin 2.
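
Real drivers are written in C, but the structural fix is language-agnostic. Here's a minimal Python sketch of the data-driven alternative - every chipset name and pin number below is invented for illustration:

```python
# A declarative table instead of a switch: supporting a new board is
# one data entry, not another hand-written case branch.
# (All chipset names and pin numbers here are invented.)
PIN_CONFIGS = {
    ("examplechip-100", "vendor-a-laptop"): {"headphone": 2, "mic": 4},
    ("examplechip-100", "vendor-b-laptop"): {"headphone": 3, "mic": 4},
    ("examplechip-200", "vendor-a-laptop"): {"headphone": 2, "mic": 5},
}

DEFAULT_PINS = {"headphone": 2, "mic": 4}

def pins_for(chipset, board):
    """Look the configuration up; fall back to sane defaults."""
    return PIN_CONFIGS.get((chipset, board), DEFAULT_PINS)

print(pins_for("examplechip-100", "vendor-b-laptop"))  # {'headphone': 3, 'mic': 4}
```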


At the other end of the spectrum, we have inflexible, hacked-together UI code, where any change to a view requires changes in two other files, often involving mappings that could be determined implicitly, or removed entirely with a better approach to templating.
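
As a sketch of what an implicitly determined mapping might look like (the model and field names here are hypothetical), a shared naming convention can replace a hand-maintained widget-to-attribute mapping file:

```python
# Instead of maintaining an explicit widget->attribute mapping in a
# separate file, derive the binding from a shared naming convention.
# (The model and field names are invented for illustration.)
class Settings:
    def __init__(self):
        self.volume = 70
        self.brightness = 55

def bind(view_fields, model):
    """Fill each view field from the model attribute of the same name."""
    return {name: getattr(model, name) for name in view_fields}

view = ["volume", "brightness"]   # stands in for widgets in a UI file
print(bind(view, Settings()))     # {'volume': 70, 'brightness': 55}
```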


The phenotypical results of the underlying architectural problems are rife. My file-system suddenly became read-only yesterday. I hadn't done anything - I'd just walked across the room to talk to someone for two minutes. Sure, it's alpha, but seriously... how could this happen? Another time, during a UDS session, I was listening to something in my headphones and didn't realise until somebody told me that, embarrassingly, the sound was also coming out of my laptop speakers.


Craig Larman says this: "We do not build software. The bricks are laid when we hit compile. We are designers." We design the architecture. We design the interfaces. We invent ways to model reality in code.


If you write code by laying bricks - placing one switch condition after another - you are not programming; you are doing what the computer should be doing. If you copy and paste, you are doing what the computer should be doing.


Object-oriented code can be understood as a way of creating structures that let the computer reuse code across all the different places it is needed, so that you only ever have to edit that code in one place. If a change to an application's behaviour requires the same code to be edited in two places, then your design is wrong.
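
A toy Python illustration of the edit-in-one-place principle (the class names are invented):

```python
# Shared formatting logic lives in one place: changing the label
# format means editing exactly one method.
class MenuItem:
    def __init__(self, name, shortcut):
        self.name, self.shortcut = name, shortcut

    def label(self):
        # The single point of truth for how labels are rendered.
        return f"{self.name} ({self.shortcut})"

class ToolbarButton(MenuItem):
    # Reuses label() rather than re-implementing it; a behaviour
    # change never requires a second, parallel edit.
    pass

print(MenuItem("Save", "Ctrl+S").label())      # Save (Ctrl+S)
print(ToolbarButton("Open", "Ctrl+O").label()) # Open (Ctrl+O)
```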


Dynamic languages vastly simplify the process of writing concise code. In verbose languages like Java, C# or ActionScript 3, the programmer's intent is buried beneath layers of boilerplate, braces, nestings and mappings (usually generated automatically by the IDE these days). Python, Ruby, or even Processing, let us strip away all this noise and crystallise our intentions.
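
For example, the intent "count how often each word occurs" is a couple of lines of Python, with no class scaffolding, type declarations or manual map bookkeeping:

```python
# The whole intent in two lines of working code.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"
print(Counter(text.split()).most_common(3))
# [('the', 3), ('quick', 1), ('brown', 1)]
```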


Michael Forrest's Three Rules of Programming

  1. Always start by defining your interface. Never start with the implementation. Your interfaces and APIs should always model your problem domain, NEVER the way the computer works.
  2. Name things correctly. Never start typing until you have precisely the right method, variable or class name. You only have to type your code in once. You, and others, have to read it thousands of times.
  3. Annihilate hand-written repetitive code by writing scripts. If you cannot eliminate repetition in your application code or data files, write a script to generate those files automatically, and never edit them by hand (see the sketch after these rules).
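
Here's a minimal sketch of rule 3 - the languages, file names and layout are all invented for illustration:

```python
# generate_locales.py -- one source of truth, generated output files
# that are never edited by hand.
GREETINGS = {"en": "Hello", "fr": "Bonjour", "de": "Hallo"}

for lang, greeting in GREETINGS.items():
    with open(f"greeting_{lang}.properties", "w") as f:
        f.write("# GENERATED FILE - edit generate_locales.py instead\n")
        f.write(f"greeting={greeting}\n")

print(f"Generated {len(GREETINGS)} files from one table.")
```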

Michael Forrest's Three Rules of Workflow

  1. Optimise your workflow. Make it so you can hit a single keyboard shortcut after any code change that will show you the results of that change within 4 seconds.
  2. Don't run automated tests manually. If you're not running your tests automatically, you're going to end up abandoning them (see the watcher sketch after these rules).
  3. Version-control everything with a distributed VCS. I don't care if it's git, hg or bzr - if you're not using local version control, you cannot write good code.
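
Here's a minimal sketch of rule 2 in Python. It assumes a pytest suite in the current directory; the crude polling keeps it dependency-free (a real setup might use inotify instead):

```python
# watch_tests.py -- rerun the test suite whenever a source file changes.
import os
import subprocess
import time

def snapshot(root="."):
    """Map every .py file under root to its last-modified time."""
    return {
        os.path.join(dirpath, name): os.path.getmtime(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(root)
        for name in names if name.endswith(".py")
    }

last = snapshot()
while True:
    time.sleep(1)
    current = snapshot()
    if current != last:
        last = current
        subprocess.run(["python", "-m", "pytest", "-q"])  # assumes pytest is installed
```
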
Some examples from my own processes
  • When I write Java, I only ever generate method names by typing the call first, in context, and then letting Eclipse generate the method definition automatically with a keyboard shortcut. (P1)
  • I will stop and walk around for half an hour trying to think of the best name for a class or method if one doesn't come to mind immediately, even if the implementation is trivial and my deadline is in an hour. If you don't do it straight away, you won't do it. No broken windows (to quote Larman again). (P2)
  • I will always optimise the readability of my XML files before writing the class that consumes them (see the sketch at the end of this list). (P1)
  • If the functionality of a method or class mutates over time, I will always use refactoring tools to rename it correctly. (P2)
  • If there is a naming convention that can be used, I will build this in throughout my process. For a detailed example see this blog post, which I implore you to read: http://michaelforrest-code.blogspot.com/2009/03/naming-conventions-and-asset-management.html (P3) 
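
To illustrate the XML point above (the document structure is invented): shape the file for a human reader first, then let the consuming class do the small amount of work needed to adapt:

```python
# The XML is shaped for a human reader first; the consuming class
# then adapts to it. (The document structure here is invented.)
import xml.etree.ElementTree as ET

DOC = """
<levels>
  <level name="tutorial" difficulty="easy"/>
  <level name="caves"    difficulty="hard"/>
</levels>
"""

class LevelCatalogue:
    def __init__(self, xml_text):
        self.levels = {
            el.get("name"): el.get("difficulty")
            for el in ET.fromstring(xml_text).findall("level")
        }

print(LevelCatalogue(DOC).levels)  # {'tutorial': 'easy', 'caves': 'hard'}
```
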
I await your feedback :)


5 comments:

  1. Yes, yes, yes, and yes. But what's the root cause? We can write lists of suggestions and good practices and patch all of the symptoms, but why do Ubuntu's technical stakeholders tend to place the computer's needs before the human's needs?

    PS: If your C# is verbose, you don't know C# ;)

  2. None of my code is verbose :) I was mainly referring to the braces.

  3. Some sensible stuff here. As a developer who leans heavily towards the front end, I can only agree that a project should be defined by its end usability rather than implementation factors.

    Sadly, every single word of what you wrote was rendered completely and utterly meaningless by the use of hyperbole ("if you're not using local version control, you cannot write good code") ;-)

  4. I find that it can be very well worth taking inspiration from nature and biology. Nature can be very elegant and very technical at the same time. I have to link to this clip; although it's from a different domain, it shows the power of nature's principles: http://www.youtube.com/watch?v=L5JHMpLIqO4

    Why not have a desktop interface that organizes information analogously to the nervous system (see also: ant colony optimization)? Like the "web" we already have, but taking the web more literally and really showing it, as some kind of map. For example: why not maintain, for every window, a list of the other windows to which the user regularly switches, and then link to these applications just at the border of the window - a kind of hyperlink, if you will. Later on, the user could be shown a tube-style map.

    Possibly, the user could also choose different switching profiles, indicated just beside the hyperlinks, tailored to different use cases where you switch applications in very different patterns. These patterns cannot be put into a program all at once, because they are so different from each other and possibly not even predictable; they won't fit into any one formula. Users would have to create them in some way.

    And one day you could even associate applications that are not yet started. So if I decide to run my IDE, the window manager/interface could automatically associate a specific terminal profile to be launched and indicate that with a nice icon. In the long run, personal preferences and workflows would become more ubiquitous, because they would no longer be part of a specific program but an integral element of the general map, maybe even having their own widgets. Maybe I'll program something like this one day...

  5. Hi Mike, are we talking here about creating a user-centered development movement? :)
