April 20, 2015 4:02 PM

"Disjunctive Inference" and Learning to Program

Over the weekend, I read Hypothetical Reasoning and Failures of Disjunctive Inference, a well-sourced article on the problems people have making disjunctive inferences. It made me think about some of the challenges students have learning to program.

Disjunctive inference is reasoning that requires us to consider hypothetical alternatives and work out what follows in each possible case. A simple example from the article is "the married problem":

Jack is looking at Ann, but Ann is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person?
  1. Yes.
  2. No.
  3. Cannot be determined.

The answer is yes, of course, which is obvious if we consider the two possible cases for Ann. Most people, though, stop thinking as soon as they realize that the answer hinges on Ann's status. They don't know her status, so they can't know the answer to the question. Even so, most everyone understands the answer as soon as the reasoning is explained to them.
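
To make the case analysis concrete, here is a minimal sketch in Python that simply enumerates both possibilities for Ann. The variable names are mine, invented for illustration; the article itself contains no code.

    # Jack (married) is looking at Ann; Ann is looking at George (unmarried).
    # Consider both hypothetical cases for Ann's status.
    for ann_is_married in (True, False):
        if ann_is_married:
            answer = True    # Ann (married) is looking at George (unmarried).
        else:
            answer = True    # Jack (married) is looking at Ann (unmarried).
        print("Ann married?", ann_is_married, "->", answer)
    # The answer is True in both cases, so it can be determined: yes.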

The reasons behind our difficulties with disjunctive inference are complex, including both the general difficulty we have with hypotheticals and a cognitive bias sometimes called cognitive miserliness: we seek to expend the minimum amount of effort when solving problems and making decisions. This is a reasonable evolutionary bias in many circumstances, but here it is maladaptive.

The article is fascinating and well worth a full read. It points to a number of studies in cognitive psychology that seek to understand how humans behave in the face of disjunctive inferences, and why. It closes with some thoughts on improving disjunctive reasoning ability, though there are no quick fixes.

As I read the article, it occurred to me that learning to program places our students in a near-constant state of hypothetical reasoning and disjunctive inference. Tracing code that contains an if statement asks them to think about alternative paths and alternative outcomes. Understanding what is true after the if statement executes is an exercise in disjunctive inference.
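
A tiny example shows the kind of reasoning involved. This is my own sketch in Python, not something from the article:

    def absolute_value(n):
        # Two paths through the if: one negates n, the other leaves it alone.
        if n < 0:
            n = -n
        # What is true here? Either n was negative and has been negated,
        # or it was non-negative and is unchanged. In both cases n >= 0,
        # but seeing that requires reasoning over both branches.
        return n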

Something similar may be true for a for loop, which executes once for each of several values of a counter, and a while loop, which runs an indeterminate number of times. These aren't disjunctive inferences, but they do require students to think hypothetically. I wonder whether the trouble many of my intro CS students had last semester learning function calls involved failures of hypothetical reasoning as much as it involved difficulties with generalization.
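
Even a two-line while loop asks for this kind of thinking. Again, a sketch of my own in Python:

    def countdown(n):
        # How many times does the body run? It depends on a value that is
        # not fixed in the code, so tracing the loop means reasoning
        # hypothetically: "if n were 3, then...; if n were 0, then...".
        while n > 0:
            print(n)
            n = n - 1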

And think about learning to debug a program.... How much of that process involves hypotheticals and even full-on disjunctive inference? If most people have trouble with this sort of reasoning even on simple tasks, how much harder must it be for young people who are learning a programming language for the first time and trying to reason about programs much more complex than "the married problem"?

Thinking explicitly about this flaw in human reasoning may help us teachers do a better job of helping students learn. In the short term, we can help them by giving more direct prompts for how to reason. Perhaps we can also help them learn to prompt themselves when faced with certain kinds of problems. In the longer term, we can perhaps help them develop a process for solving problems that mitigates the bias. This is all about forming useful habits of thought.

If nothing else, reading this article will help me be slower to judge my students' work ethic. What looks like laziness is more likely a manifestation of a natural bias to exert the minimum amount of effort when solving problems. We are all cognitive misers to a certain extent, and that serves us well. But not always when we are writing and debugging programs.


Posted by Eugene Wallingford | Permalink | Categories: Computing, Teaching and Learning

April 09, 2015 3:26 PM

Two Factors for Succeeding at Research, or Investing

Think differently, of course. But be humble. These attitudes go hand-in-hand.

To make money in the markets, you have to think independently and be humble. You have to be an independent thinker because you can't make money agreeing with the consensus view, which is already embedded in the price. Yet whenever you're betting against the consensus there's a significant probability you're going to be wrong, so you have to be humble.

This applies equally well to doing research. You can't make substantial progress with the conventional wisdom, because it defines and limits the scope of the solution. So think differently. But when you leave the safety of conventional wisdom, you find yourself working in an immense space of ideas. There is a significant chance that you will be wrong a lot. So be humble.

(The quote is from Learn or Die: Using Science to Build a Leading-Edge Learning Organization by Ray Dalio.)


Posted by Eugene Wallingford | Permalink | Categories: General, Teaching and Learning

April 08, 2015 2:14 PM

Teaching and the Transformation of Self

After reading What Wittgenstein Learned from Teaching Elementary School in The Paris Review, I think that, while Wittgenstein seems to have had some ideas for making elementary education better, I probably wouldn't want to have him as my teacher. This passage, though, really stuck with me:

We all struggle to form a self. Great teaching, Wittgenstein reminds us, involves taking this struggle and engaging in it with others; his whole life was one great such struggle. In working with poor children, he wanted to transform himself, and them.

The experience of teaching grade school for six years seems to have changed Wittgenstein and how he worked. In his later work, he famously moved away from the idea that language could only function by picturing objects in the world. There is no direct evidence that working with children was the impetus for this shift, but "his later work is full of references to teaching and children". In particular, Philosophical Investigations begins its investigation of "the essence of language" by discussing how children learn language.

And Wittgenstein is sometimes explicit about the connection; he once said that in considering the meaning of a word, it's helpful to ask, "How would one set about teaching a child to use this word?"

We all know that teaching can change the student, but first and foremost it changes the teacher. Wittgenstein seems to have understood that this is part of the universal task of forming one's self.


Posted by Eugene Wallingford | Permalink | Categories: Teaching and Learning

March 30, 2015 3:33 PM

Reminiscing on the Effects of Photoshop

Thomas Knoll, one of the creators of Adobe Photoshop, reminisces on the insight that gave rise to the program. His brother, John, worked on analog image composition at Industrial Light and Magic, where they had just begun to experiment with digital processing.

[ILM] had a scanner that could scan in frames from a movie, digitally process them, and then write the images out to film again.

My brother saw that and had a revelation. He said, "If we convert the movie footage into numbers, and we can convert the numbers back into movie footage, then once it's in the numerical form we could do anything to it. We'd have complete power."

I bought my first copy of Photoshop in the summer of 1992, as part of my start-up package for new faculty. In addition to the hardware and software I needed to do my knowledge-based systems research, we also outfitted the lab with a number of other tools, including Aldus Persuasion, a LaCie digital scanner, OmniPage Pro software for OCR, Adobe Premiere, and Adobe Photoshop. I felt like I could do anything I wanted with text, images, and video. It was a great power.

In truth, I barely scratched the surface of what was possible. Others took Photoshop and went places that even Adobe didn't expect them to go. The Knoll brothers sensed what was possible, but it must have been quite something to watch professionals and amateurs alike use the program to reinvent our relationship with images. Here is Thomas Knoll again:

Photoshop has so many features that make it extremely versatile, and there are artists in the world who do things with it that are incredible. I suppose that's the nature of writing a versatile tool with some low-level features that you can combine with anything and everything else.

Digital representation opens new doors for manipulation. When you give users control at both the highest levels and the lowest, who knows what they will do. Stand back and wait.


Posted by Eugene Wallingford | Permalink | Categories: Computing

March 24, 2015 3:40 PM

Some Thoughts on How to Teach Programming Better

In How Stephen King Teaches Writing, Jessica Lahey asks Stephen King why we should teach grammar:

Lahey: You write, "One either absorbs the grammatical principles of one's native language in conversation and in reading or one does not." If this is true, why teach grammar in school at all? Why bother to name the parts?

King: When we name the parts, we take away the mystery and turn writing into a problem that can be solved. I used to tell them that if you could put together a model car or assemble a piece of furniture from directions, you could write a sentence. Reading is the key, though. A kid who grows up hearing "It don't matter to me" can only learn doesn't if he/she reads it over and over again.

There are at least three nice ideas in King's answer.

  • It is helpful to beginners when we can turn writing into a problem that can be solved. Making concrete things out of the ideas in our heads is hard. When we give students tools and techniques that help them create basic sentences, paragraphs, and stories, we make the process of creating a bit more concrete and a bit less scary.

  • A first step in this direction is to give names to the things and ideas students need to think about when writing. We don't want students to memorize the names for their own sake; that's a step in the wrong direction. We simply need to have words for talking about the things we need to talk about -- and think about.

  • Reading is, as the old slogan tells us, fundamental. It helps to build knowledge of vocabulary, grammar, usage, and style in a way that the brain absorbs naturally. It creates habits of thought that are hard to undo later.

All of these are true of teaching programmers, too, in their own way.

  • We need ways to demystify the process and give students concrete steps they can take when they encounter a new problem. The design recipe used in the How to Design Programs approach is a great example; a rough sketch of its spirit appears after this list. Naming recipes and their steps makes them a part of the vocabulary teachers and students can use to make programming a repeatable, reliable process.

  • I've often had great success by giving names to design and implementation patterns, and having those patterns become part of the vocabulary we use to discuss problems and solutions. I have a category for posts about patterns, and a fair number of those relate to teaching beginners. I wish there were more.

  • Finally, while it may not be practical to have students read a hundred programs before writing their first, we should not underestimate the importance of students reading code in parallel with learning to write code. Reading lots of good examples is a great way for students to absorb ideas about how to write their own code. It also gives them the raw material they need to ask questions. I've long thought that Clancy and Linn's work on case studies of programming deserves more attention.
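
To give a flavor of what I mean by the design recipe, here is a minimal and necessarily abbreviated sketch of its spirit, written in Python rather than the Racket used in How to Design Programs. The problem and the names are hypothetical, chosen only to show the steps of signature, purpose statement, examples, and body:

    # Signature and purpose statement:
    # count_long_words : list of str, int -> int
    # Count how many words in the list are longer than the given length.

    # Examples, written before the body and kept as tests:
    # count_long_words([], 3)                == 0
    # count_long_words(["cat", "horse"], 3)  == 1

    # Body, filled in last by following a template for processing a list:
    def count_long_words(words, length):
        count = 0
        for word in words:
            if len(word) > length:
                count += 1
        return count

    assert count_long_words([], 3) == 0
    assert count_long_words(["cat", "horse"], 3) == 1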

Finding ways to integrate design recipes, patterns, and case studies is an area I'd like to explore more in my own teaching.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development, Teaching and Learning

March 13, 2015 3:07 PM

Two Forms of Irrelevance

When companies become irrelevant to consumers.
From The Power of Marginal, by Paul Graham:

The big media companies shouldn't worry that people will post their copyrighted material on YouTube. They should worry that people will post their own stuff on YouTube, and audiences will watch that instead.

You mean Grey's Anatomy is still on the air? (Or, as today's teenagers say, "Grey's what?")

When people become irrelevant to intelligent machines.
From Outing A.I.: Beyond the Turing Test, by Benjamin Bratton:

I argue that we should abandon the conceit that a "true" Artificial Intelligence must care deeply about humanity -- us specifically -- as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.

Our new computer overlords indeed. This calls for a different sort of preparation than studying lists of presidents and state capitals.


Posted by Eugene Wallingford | Permalink | Categories: Computing, General

March 11, 2015 4:15 PM

If Design is Important, Do It All The Time

And you don't have to be in software. In Jonathan Ive and the Future of Apple, Ian Parker describes how the process of developing products at Apple has changed during Ive's tenure.

... design had been "a vertical stripe in the chain of events" in a product's delivery; at Apple, it became "a long horizontal stripe, where design is part of every conversation." This cleared a path for other designers.

By the time the iPhone launched, Ive had become "the hub of the wheel".

The vertical stripe/horizontal stripe image brought to mind Kent Beck's reimagining of the software development cycle in XP. I thought the image in my head came from Extreme Programming Explained, but the closest thing I can find to what I remember is in his IEEE Computer article, Embracing Change with Extreme Programming:

if it's important, do it all the time

My mental image has time on the x-axis, though, which meshes better with the vertical/horizontal metaphor of Robert Brunner, the designer quoted in the passage above.

[image: design as a thin vertical stripe versus design as a long horizontal stripe]

If analysis is important, do it all the time. If design is important, do it all the time. If implementation is important, do it all the time. If testing is important, do it all the time.

Ive and his team have shown that there is value in making design an ongoing part of the process for developing hardware products, too, where "design is part of every conversation". This kind of thinking is not just for software any more.


Posted by Eugene Wallingford | Permalink | Categories: Patterns, Software Development

March 10, 2015 4:45 PM

Learning to Program is a Loser's Game

After a long break from playing chess, I recently played a few games at the local club. Playing a couple of games in each of the past two weeks has reminded me that I am very rusty. I've made only two horrible blunders in four games, but I have made many small mistakes, the kind of errors that accumulate over time and make a position hard to defend, even untenable. Having played better in years past, I find these inaccuracies irksome.

Still, I managed to win all four games. As I've watched games at the club, I've noticed that most games are won by the player who makes the second-to-last blunder. Most of the players are novices, and they trade mistakes: one player leaves his queen en prise; later, his opponent launches an underprepared attack that loses a rook; then the first player trades pieces and leaves himself with a terrible pawn structure -- and so on, the players trading weak or bad moves until the position is lost for one of them.

My secret thus far has been one part luck, one part simple strategy: winning by not losing.

This experience reminded me of a paper called The Loser's Game, which in 1975 suggested that it was no longer possible for a fund manager to beat market averages over time because most of the information needed to do well was available to everyone. To outperform the market average, a fund manager has to profit from mistakes made by other managers, sufficiently often and by a sufficient margin to sustain a long-term advantage. Charles Ellis, the author, contrasts this with the bull markets of the 1960s. Then, managers made profits based on the specific winning investments they made; in the future, though, the best a manager could hope for was not to make the mistakes that other investors would profit from. Fund management had transformed from being a Winner's Game to a Loser's Game.

[image: the cover of Extraordinary Tennis for the Ordinary Tennis Player]

Ellis drew his inspiration from another world, too. Simon Ramo had pointed out the differences between a Winner's Game and a Loser's Game in Extraordinary Tennis for the Ordinary Tennis Player. Professional tennis players, Ramo said, win based on the positive actions they take: unreturnable shots down the baseline, passing shots out of the reach of a player at the net, service aces, and so on. We duffers try to emulate our heroes and fail... We hit our deep shots just beyond the baseline, our passing shots just wide of the sideline, and our killer serves into the net. It turns out that mediocre players win based on the errors they don't make. They keep the ball in play, and eventually their opponents make a mistake and lose the point.

Ramo saw that tennis pros are playing a Winner's Game, and average players are playing a Loser's Game. These are fundamentally different games, which reward different mindsets and different strategies. Ellis saw the same thing in the investing world, but as part of a structural shift: what had once been a Winner's Game was now a Loser's Game, to the consternation of fund managers whose mindset is finding the stocks that will earn them big returns. The safer play now, Ellis says, is to minimize mistakes. (This is good news for us amateur investors!)

This is the same phenomenon I've been seeing at the chess club recently. The novices there are still playing a Loser's Game, where the greatest reward comes to those who make the fewest and smallest mistakes. That's not very exciting, especially for someone who fancies herself to be Adolf Anderssen or Mikhail Tal in search of an immortal game. The best way to win is to stay alive, making moves that are as sound as possible, and wait for the swashbuckler across the board from you to lose the game.

What does this have to do with learning to program? I think that, in many respects, learning to program is a Loser's Game. Even a seemingly beginner-friendly programming language such as Python has an exacting syntax compared to what beginners are used to. The semantics seem foreign, even opaque. It is easy to make a small mistake that chokes the compiler, which then spews an error message that overwhelms the new programmer. The student struggles to fix the error, only to find another error waiting somewhere else in the code. Or he introduces a new error while eliminating the old one, which makes even debugging seem scary. Over time, this can dishearten even the heartiest beginner.

What is the best way to succeed? As in all Loser's Games, the key is to make fewer mistakes: follow examples closely, pay careful attention to syntactic details, and otherwise not stray too far from what you are reading about and using in class. Another path to success is to make the mistakes smaller and less intimidating: take small steps, test the code frequently, and grow solutions rather than write them all at once. It is no accident that the latter sounds like XP and other agile methods; they help to guard us from the Loser's Game and enable us to make better moves.
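
Here is a minimal sketch of what "growing" a solution can look like, in Python and with a problem I've made up for the purpose. The content of the function matters less than the rhythm: small step, quick test, next small step.

    # Step 1: handle the ordinary case first, and test it right away.
    def average(numbers):
        return sum(numbers) / len(numbers)

    assert average([4]) == 4
    assert average([2, 4, 6]) == 4

    # Step 2: only after that works, grow the function to handle the
    # empty list, then re-run all of the tests.
    def average(numbers):
        if not numbers:
            return 0
        return sum(numbers) / len(numbers)

    assert average([]) == 0
    assert average([4]) == 4
    assert average([2, 4, 6]) == 4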

Just as playing the Loser's Game in tennis or investing calls for a different mindset, so, too, does learning to program. Some beginners seem to grok programming quickly and move on to designing and coding brilliantly, but most of us have to settle in for a period of discipline and growth. It may not be exciting to follow examples closely when we want to forge ahead quickly to big ideas, but the alternative is to take big shots and let the compiler win all the battles.

Unlike tennis and Ellis's view of stock investing, programming offers us hope: nearly all of us can make the transition from the Loser's Game to the Winner's Game. We are not destined to play it safe forever. With practice and time, we can develop the discipline and skills necessary to make bold, winning moves. We just have to be patient and put time and energy into the process of becoming less mistake-prone. By adopting the mindset needed to succeed in a Loser's Game, we can eventually play the Winner's Game.

I'm not too sure about the phrases "Loser's Game" and "Winner's Game", but I think that this analogy can help novice programmers. I'm thinking of ways that I can use it to help my students survive until they can succeed.


Posted by Eugene Wallingford | Permalink | Categories: General, Software Development, Teaching and Learning

March 06, 2015 11:29 AM

A Brief Return to the 18th Century

[image: Petrov's Defense]

I recently discovered that the students at my university have a chess club, so I stopped over yesterday to play a couple of games. In the first, my opponent played Philidor's Defense. In the second, I played Petrov's Defense. For a moment, I felt as if we had drifted in time to a Parisian cafe, circa 1770.

Then I looked up and saw a bank of TV screens surrounded by students who were drinking lattes and using cell phones to scroll through photos. I was back from my reverie.


Posted by Eugene Wallingford | Permalink | Categories: Personal

March 04, 2015 3:28 PM

Code as a Form of Expression, Even Spreadsheets

Even formulas in spreadsheets, even back in the early 1980s:

Spreadsheet models have become a form of expression, and the very act of creating them seems to yield a pleasure unrelated to their utility. Unusual models are duplicated and passed around; these templates are sometimes used by other modelers and sometimes only admired for their elegance.

People love to make and share things. Computation has given us another medium in which to work, and the things people make with it are often very cool.

The above passage comes from Steven Levy's A Spreadsheet Way of Knowledge, which appeared originally in Harper's magazine in November 1984. He re-published it on Medium this week in belated honor of Spreadsheet Day last October 17, which was the 35th anniversary of VisiCalc, "the Apple II program that started it all". It's a great read, both as history and as a look at how new technologies create unexpected benefits and dangers.


Posted by Eugene Wallingford | Permalink | Categories: Computing