Sergey Mikhanov  

Objective: Get Real (March 23, 2014)

This post is a reply to Jason Brennan’s post “Objective: Next.” Jason reiterates that the programming environment for iOS is very limited in both its language and its standard library capabilities, outlines some concrete issues he sees as problematic, and calls for liberating programming from its textual style. My opinion is that this is (a) impossible in the near future, and (b) harmful.

A lot has been said and written about how the iOS, Objective-C and Xcode programming ecosystem is, basically, inadequate. The language still maintains most of the basics it inherited from the early eighties, despite all the later advances in programming language design, which frustrates programmers familiar with friendlier and more open programming environments. But it’s hard to argue with popularity. Mobile is very attractive to even the best hackers (and businesses), and all those very smart people try to find their way around the limitations of the technology.

In itself, this is not so much of a problem. Much more problematic is the situation when smart people start extrapolating the day-to-day problems of their very narrow, closed and restrictive environment to the programming profession as a whole. Bret Victor, quoted in the post (absolutely out of context, in my opinion), manages to avoid this mistake, probably thanks to his experience and breadth of worldview. A common vein in his work has nothing to do with programming per se, but addresses abstractions of any kind: those of mathematics, electrical engineering, animation, etc. His recurring point is that computers can help us express our thoughts in a more coherent and effective manner, thus becoming better tools for thought.

One may argue here that developing software is just another type of intellectual activity and can therefore benefit from better abstractions in the process of programming. I mostly agree with this point, but it’s clear that the intellectual leverage offered by any abstraction of this sort is only valid in a very limited context. To underline this limitation, I’ll use an example. Some of my UI designer friends see very little that is special in Bret’s presentations and videos. For them, it all boils down to the “immediacy of response”, a concept known to everyone in the UI world. It postulates that the result of each action should be immediately obvious to the user, even if the outcome of the requested action is delayed. This is why the LED lights in the control buttons of your washing machine turn on the moment you press them, or why a “Logging in” spinner is shown after you type your username and password on a website.
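To make the principle concrete, here is a minimal Objective-C sketch of the login example (the outlets, the session object and its login method are hypothetical, invented purely for illustration): the tap is acknowledged immediately, while the actual outcome arrives whenever it is ready.

- (IBAction)logIn:(id)sender {
    // Immediate response: acknowledge the tap before any work is done.
    [self.spinner startAnimating];
    self.logInButton.enabled = NO;

    // The delayed outcome: do the slow part off the main thread...
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Hypothetical session object and login call, for illustration only.
        BOOL succeeded = [self.session logInWithUsername:self.usernameField.text
                                                password:self.passwordField.text];

        // ...and report back on the main thread once the result is known.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.spinner stopAnimating];
            self.logInButton.enabled = YES;
            [self showLoginResult:succeeded]; // hypothetical helper
        });
    });
}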

When taken in the simple context of washing machines, this approach makes a lot of sense. It can even be extended to some basics of programming. There are tools that show you contextual information about the code and the project: see the touch operations on code in Codea, for example, or the way the left-hand ruler works in some tools from JetBrains. The daily minutiae of iOS development, like updating a font size or not seeing a color when it’s typed in hexadecimal, all fall into this category (funnily enough, developer-designer interaction was quoted as a problematic point, but teamwork is one of the reasons why programming languages are textual).

It’s scaling those abstractions to the world outside of iOS that breaks them. Unifying the process of developing a website, an OS kernel, real-time software that handles 911 calls, and an iPhone app is a task bordering on the impossible.

As I said above, you can’t argue with popularity. It’s normal to ask for better tooling, as Jason does, even while talking about avoiding instrumental thinking. But given the sheer number of young programmers starting their careers as developers on the iOS platform, it’s very dangerous to redefine programming on iOS in such a narrow sense. We don’t want all those young programmers to become incapable of developing anywhere outside of Xcode, or scared of moving past it.

Because it’s when you see how wide the whole spectrum of the programming profession is that you can get truly scared.

Logging is the new commenting (July 3, 2013)

Back in 1999, when I was at university, I was studying the C programming language, just like every other CS major in the world. In our little programs there were header files and implementation files, and everyone in my class, myself included, learned quite early that the number of files in your project grows twice as fast as you add what our mentor called “a new compilation unit.”

Someone in the class (unfortunately, not me) pointed out this obvious design “flaw.” Really, why do you have to duplicate your effort? All the information the compiler needs about your program is already in the implementation file. I only learned later that this is the point where most programmers start their affairs with more concise programming languages. But the lesson of that day was clear: duplication is bad.

One semester later, everyone in the class got their copy of the third edition of Bjarne Stroustrup’s “The C++ Programming Language”. One quote from it I learned almost by heart. It was (this time, fortunately) not related to C++ itself; it comes from chapter 6.4, “Comments and Indentation”:

Comments can be misused in ways that seriously affect the readability of a program. The compiler does not understand the contents of a comment, so it has no way of ensuring that a comment:

  1. is meaningful,
  2. describes the program, and
  3. is up to date.

Most programs contain comments that are incomprehensible, ambiguous, and just plain wrong. […] If something can be stated in the language itself, it should be, and not just mentioned in a comment.

As much as I dislike C++, the writing style of its author is excellent.

For the next ten years or so of my programming career I barely left any comments in the code I wrote. Of course I understood that comments could potentially be useful in the future to anyone reading the code (myself included), but writing code seemed easy and no explanation seemed necessary. Throughout those same years I discovered the usefulness of logging; it was so helpful that I stopped using a debugger completely. Here’s an example snippet from a deliberately incomplete recent program. Note the number of logging statements:

- (NSArray *)unify:(GAClause *)query with:(NSDictionary *)base dynamicContext:(NSArray *)dynamicContext {
    DLog(@"Unifying query %@, dynamic context is %@", query, dynamicContext);
    
    NSMutableArray *result = [NSMutableArray array];

    GATuple *selfUnified = [query selfUnify:nil dynamicContext:dynamicContext];

    if (selfUnified) {
        DLog(@"Query self-unified, got tuple: %@ ", selfUnified);

        NSMutableArray *tuples = [NSMutableArray array];
        for (GATuple *t in dynamicContext) {
            [t merge:selfUnified];
            
            [tuples addObject:t];
        }
        
        DLog(@"Merged self-unification result with dynamic context: %@", tuples);
        
        return tuples;
    } else {
        for (GAClause *left in [base allKeys]) {
            GAClause *right = [base objectForKey:left];

            GATuple *tuple = [self unifySingle:query with:left dynamicContext:result];

            DLog(@"Bindings after unifying: %@ (clauses were %@ and %@)", tuple, query, left);

            if (!tuple)
                continue;

            [result addObject:tuple];

            DLog(@"Accumulated result: %@", result);

            if (![right isEqual:[GAConstant kTRUE]]) {

The Objective-C methods [selfUnify:dynamicContext:] and [unifySingle:with:dynamicContext:] generate even more logging inside. Over the years, once I had produced and worked with a significant number of programs that generate meaningful logs, I noticed that reading the logging statements in the code helps greatly in understanding what the program is about, even without running it. Really, try mentally replacing all the logging above with comments:

- (NSArray *)unify:(GAClause *)query with:(NSDictionary *)base dynamicContext:(NSArray *)dynamicContext {
    // Unifying query with dynamic context
    
    NSMutableArray *result = [NSMutableArray array];

    GATuple *selfUnified = [query selfUnify:nil dynamicContext:dynamicContext];

    if (selfUnified) {
        // If we reached here, query is self-unifyable

        NSMutableArray *tuples = [NSMutableArray array];
        for (GATuple *t in dynamicContext) {
            [t merge:selfUnified];
            
            [tuples addObject:t];
        }
        
        // Merged self-unification result with dynamic context
        
        return tuples;
    } else {
        for (GAClause *left in [base allKeys]) {
            GAClause *right = [base objectForKey:left];

            GATuple *tuple = [self unifySingle:query with:left dynamicContext:result];

            // Finished unifying, variable bindings now created

            if (!tuple)
                continue;

            [result addObject:tuple];

            // At this point we have partly accumulated result

            if (![right isEqual:[GAConstant kTRUE]]) {

Once you’ve got the logging, the need for lots of comments in the code drops dramatically. You don’t need to duplicate your effort. You can even get rid of the downsides described by Bjarne Stroustrup, because someone running the program and reading its logs will make sure the output is meaningful, descriptive and current. Logging therefore becomes a better commenting system, helpful both at runtime and when figuring out what a particular piece of code does. Given the flexibility and robustness of the modern logging frameworks available in every language, there are very few reasons left to use lots of comments. Just log more.
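As an aside, DLog in the snippets above is not a standard Foundation macro, and its definition is not shown in this post. A common convention, and only a plausible guess at what is used here, is a thin wrapper over NSLog that prefixes each message with the current method name and compiles away entirely in release builds:

// Assumed definition of DLog; the actual one in the project may differ.
#ifdef DEBUG
#   define DLog(fmt, ...) NSLog((@"%s " fmt), __PRETTY_FUNCTION__, ##__VA_ARGS__)
#else
#   define DLog(...)
#endif

Defined this way, verbose logging costs nothing in the shipped binary, which removes the usual objection to leaving so many logging statements in place.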

If you could have infinite coding speed, what project would you work on? (October 3, 2011)

I love contemporary art. It exists within its own universe, driven by principles that artists sometimes invent on the fly. It’s an incessant journey towards pure aesthetics while trying to ignore consumer, political, religious and social values, a constant search for meaning. Exciting stuff.

It so happens that I’m a founder and a programmer, and my exposure to the art world is usually limited to gallery visits and books. But every time I encounter some new insight into the way artists work, I try to draw parallels with programming. For example, would a practice similar to that of copying the masters’ work (it does exist in art schools) help a young programmer’s development? If you sat down and rewrote, say, V8 (if you’re into hardcore C/C++) or Sinatra (if you’re a Ruby ninja) while peeking at the original periodically, would that make you a better programmer?

While it’s not clear how useful this way of spending your time would be, we can afford another short thought experiment. Here’s a small quote from the very famous art curator Hans Ulrich Obrist:

Doris Lessing, the Nobel Prize-winning author, once told me in a conversation that there are not only the projects which are made impossible by the frames of the contexts we work in, but there are also the projects we just don’t dare to think up. The self-censorship of projects. And there are all the books she hasn’t written because she didn’t dare to write them. So, that is the question that has been my umbilical cord, and it’s also the only question that I ask in all of my interviews. What is your unrealized project?

The most important context we, as developers, exist in is the one bounded by code complexity (which increases as any project grows) and the actual time required to type all those lines of code. What would your project be if you could get rid of those limits and have it done and working as soon as you finish typing int main() {}?

Let’s assume we don’t gain magic powers along with the infinite coding speed. That is, if your problem is not Turing-decidable, does not yet have a clear solution (like the task of high-quality speech recognition, for example), or is NP-hard, we still can’t solve it.

Now that’s a tough one.

I once started writing a Twisted-like protocol framework in Haskell; it was my pet project some time ago. Six years of telco development, where a time to market measured in months rather than years is a huge win, and where the uptime and reliability requirements are insane, made me think that having such a platform written in a more expressive and rigid language might be useful. While it would be nice to have this framework in no time, it could only be proven useful with the real-world constraints in place. Can useful applications be developed on top of this platform quickly? Nobody can say in advance. In other words, even with infinite coding speed, this project could be either a hit or a miss.

My current project, Scalar, mostly consists of UI/UX challenges. We’re trying to invent a new way of doing complex calculations on the iPad, and once we figure out how to present things on the device and how to let users interact with them, the coding is pretty trivial. It’s clear that Scalar would greatly benefit from our having infinite coding speed, but its total development time wouldn’t be reduced to zero. Moreover, its commercial success would be far from guaranteed even in that case.

At this point I started to wonder: could you turn just any project into a hit by having infinite coding speed? You could build something marginally better than Square, but you would still have to spend time wearing suits to meetings with bankers. You could quickly build something that combines Lonely Planet with Gowalla and Instagram, but you can’t speed up the travel writers taking data out of QuarkXPress files and putting it into your database. Even with systems programming, reducing implementation time to zero does not seem to help. Could you instantly build something better than VMware and win their market, for example? Not so sure. Or could you profit from implementing a GFS-like file system in no time? It looks like even the wildest dreamers will hit a wall here.

If you could have infinite coding speed, what would your project be like?

How did we get here (September 26, 2011)

Well, that Feynman quote stayed at the top for a very long time, more than half a year, actually. In the meantime I left the world of large corporations to take a ride of my own. The project I’m working on now is called Scalar², an app that aims to disrupt the way people carry out calculations on the iPad.

How did all this happen? A year ago, when Esad and I at the now-defunct Critical Point Software were trying to come up with an idea for a new app, we felt like we were running in circles. We had just tasted minor success with our subway maps app Transit, and the travel apps niche seemed promising. We tried thinking about local area guides, but neither of us wanted to deal with the well-funded, too-Viennese Tripwolf. That turned out to be the right call: despite attempts from big players, no company seems to have created a mobile travel guide that is truly great; what could two folks with day jobs have done?

So, in short, we were stuck. And we decided to go for the idea that was lying on the surface: just write a better calculator for the iPhone. The one built into the phone was definitely not good enough, and there were virtually no competitors except for Soulver, the old big player in this niche. The app would be data-independent: we had learned the hard way with Transit that an app that strongly depends on external data you collect makes support a nightmare. So, according to git log, we started working on the 13th of April 2010, and by the 4th of June we were ready to release. We named the app Scalar.

We got modest sales, but people seemed to love our tiny app. Reviews were very positive despite all the wrinkles, like the fact that at the time calculations were almost set in stone once you had entered them (there was a backspace key, but you couldn’t modify anything in the middle of your calculation). After Esad and I decided to split the assets (I won’t go into details here), I took the app with me, because I loved the idea and felt we could be onto something. Scalar development then stalled for a few months.

When I decided to leave my day job, the competitive landscape was still almost clear. Tapbots had released Calcbot, but it was clearly a well-designed toy, not really an app for calculating. Soulver was now available on both iPhone and iPad, firmly sticking to its initial vision of keeping everything in the app in text form, which is not the best form for touchscreens. Apple’s Numbers was now available on the iPad, but heck, isn’t there a place for something new on this magical device, not just a remake of a 30-year-old killer app? I already had my sleeves rolled up and was looking for a project to focus on when I saw this Apple commercial (I encourage you to spend 30 seconds of your time and watch it):

See? Nobody works with numbers, not even the CEO! So I set out to make something people would love to use instead of spreadsheets, something with no tiny cells and nothing forcing you to type =SUM(B4:B16)/E17 anywhere. So, off you go. A very simplistic iPad version: check. Better copy and paste: check. A redesign, because people love beautiful things: check.

By the middle of this summer I was going full steam ahead with development. Convenient editing functionality: check. Built-in analytics for a better understanding of users’ behavior: check. Multi-document support: check. Text handling, so people can leave annotations next to their numbers: check. I implemented and tried three different UI metaphors for selecting parts of your calculation, only to find out that no UI metaphor is needed. Simpler is better: a one-finger swipe means scroll, a two-finger swipe means select. Storing parts of your calculation for referencing them later: check. Pinning them to keep them always in sight: check. After all this was done, I moved my budget data from a Google spreadsheet to the working version of Scalar on my iPad and never looked back.

This is where we stand now. Dogfooding is a key turn for a small company with a big vision: from this moment on, you are only embarrassed by what you’ve built if you can’t use it the way you envisioned. I, for one, only feel proud now.

A quote from Surely You’re Joking, Mr. Feynman! (February 1, 2011)

The real trouble was that no one had ever told these fellows anything. The army had selected them from all over the country for a thing called Special Engineer Detachment — clever boys from high school who had engineering ability. They sent them up to Los Alamos [to work on the nuclear bomb project]. They put them in barracks. And they would tell them nothing.

Then they came to work, and what they had to do was work on IBM machines — punching holes, numbers that they didn’t understand. Nobody told them what it was. The thing was going very slowly. I said that the first thing there has to be is that these technical guys know what we’re doing. Oppenheimer went and talked to the security and got special permission so I could give a nice lecture about what we were doing, and they were all excited: “We’re fighting a war! We see what it is!” They knew what the numbers meant. If the pressure came out higher, that meant there was more energy released, and so on and so on. They knew what they were doing.

Complete transformation! They began to invent ways of doing it better. They improved the scheme. They worked at night. They didn’t need supervising in the night; they didn’t need anything. They understood everything; they invented several of the programs that we used.

So my boys really came through, and all that had to be done was to tell them what it was. As a result, although it took them nine months to do three problems before, we did nine problems in three months, which is nearly ten times as fast.