An Interview with Paul Rosenbloom

Published: January 16, 2013
Category: News

By Julia Kim
Reason #27 that one should mingle at office social events: you learn that your colleague is writing a book to totally change how people think about computing.
A few years ago, my colleague Paul Rosenbloom mentioned off-handedly over our glasses of wine that he was revising his new book. I can’t remember whether he or I proposed having me read a draft, but we both liked the idea. Since I had done some studies in the history of computing, his exploration into computing’s essence and its relationship to other fields sounded intriguing. Plus it’s interesting to see how someone within a field views the field as a whole. After reading the draft, I found it not just intriguing but exciting.
Paul pointed out that various traditional concepts of computing (it is hardware, it is software, it is math, etc.) are neither precise nor productive. As a computer scientist unafraid of meta-thinking, he proposed a computational model for computing (yup) that explained the field and its relationship to the world. He also controversially suggested that computing should be considered on par with the physical, life, and social sciences. Oh, and mathematics falls under computing.
The book, On Computing: The Fourth Great Scientific Domain, is out now from MIT Press. As Paul himself notes in his introduction to the book, you don’t have to believe everything that he says. But he hopes that it will still inspire you to think new thoughts about how computing is understood, taught, and done.
It certainly did that for me, and Paul was nice enough to answer my questions about the book and related topics.
You say in the introduction that the ideas in this book came from noticing how computing touches so many things in the world, from everyday life to research. Can you talk a little about this?
This all grew out of a decade focused on new directions activities at USC’s Information Sciences Institute. The set of topics we were working on at first seemed like nothing more than a random jumble, although it was clear that they were all inherently interdisciplinary if you looked closely at them. The topics included technology and the arts; combining computing technology and entertainment expertise for education and training; teams of robots, agents and people; sensor networks and their extension to physically coupled webs (combining sensors, effectors and computing); the use of grid technology for building virtual organizations; computational science; biomedical informatics; modeling and simulation of entire environments, such as virtual cities and comprehensive earthquake models; and automated construction. Each of these involves computing and at least one other domain, and sometimes multiple, in varying relationships.
Going from observing a phenomenon to developing a new framework seems like a non-obvious thing (at least to me!). Where did you start? How did the architecture evolve? Did you know it was going to have all the dimensions it ended up having?
As mentioned above, I started with the topics we were working on at ISI, almost all of which were interdisciplinary in some way. In trying to move beyond just which domains were involved, I asked myself how these domains related to each other in yielding the various topics, and therefore whether there was some systematic space within which it could all be fit and organized. I recall playing around on paper for quite a while at the beginning, but it was more than a decade ago, so I don’t remember the details. Out of this originally came two types of structures: (1) the early relational architecture (which had an embedding relationship in addition to implementation and interaction); and (2) what is now called in the book the hierarchical systems architecture, but which back then was considered an orthogonal way of organizing topics within computing. Over the years, the relational hierarchy has been refined repeatedly, and the other hierarchy has finally been assimilated to it.
As a (lapsed) historian of science, I like that your architecture challenges the traditional concept of science vs. engineering. You propose that “shaping” and “understanding” both contribute to knowledge creation and that there is a healthy feedback loop between the two processes. Can you talk more about this from your own experiences and what you’ve seen in computing and other fields?
My core research is actually in an area called cognitive architecture, where I try to understand the fixed structure of a/the mind, whether natural or artificial. With humans, there is already a system in existence with which we can experiment. With AI, we need to invent/build the systems before we can experiment with them. With humans, the trickiest parts are coming up with clever experiments and interpretations that help us understand this existing black box. With AI, experiments and interpretations are usually rather straightforward, with the tricky part being inventing the things that are worth experimenting on (and much of what is learned comes from the process of understanding how to build something that will work, rather than just formal experimentation on the ultimate working system).
The experiments/interpretations on both the natural and artificial systems help to guide the process of deciding what it is worthwhile to build. What is being built may have practical consequences in the real world, as a usable system, but its construction itself is an experiment in what is possible, and a means for understanding both what is possible and how it is possible. I like to quote Feynman here: “What I cannot create, I do not understand.” Anyway, for me this all means that I follow an integrated pathway that involves both understanding and shaping mind, with the natural and artificial aspects typically intertwined.
In computer science in general there may not always be a natural aspect from which to learn, but the rest of it is always there. We learn from and by building, as well as from theoretical analysis and experimentation with what has been built. At the same time, what we learn helps us build better things that are more likely to have value in the real world.
The other sciences have not always been able to leverage this form of deep integration, as historically they haven’t always been able to build instances of what they want to study, but this should be changing more and more as we understand how to create/change the physical, life and social world at their most primitive levels.
So the lines between natural and artificial are being blurred – and not just in computing but in other domains (e.g., genetically-modified foods or super-bugs responding to our antibiotics). How do you think and feel about these issues?
Since people are in fact part of nature, there really is no fundamental difference between natural and artificial phenomena, and I see no good reasons for science being limited to one versus the other. We have a compelling need to understand all of the phenomena around and in us, no matter what their origins. This need not imply that origins are unimportant in determining how something is studied or what its impact might be, but even here just determining origins may be increasingly difficult as our ability continually increases for modifying nature at its most fundamental levels. There are always risks in the creation of the new, whether it is the evolution of new organisms or the development of human inventions, and the more powerful the creation the greater the risk. Improving our understanding is really the only viable approach to dealing with risk over the long term.
A potentially more controversial idea in your book is that computing is one of the major sciences (along with the physical, life, and social sciences). When did this idea become a part of your thinking and this book?
This came up fairly late in the overall process. I first developed the relational architecture, and as I was writing it up for the book, realized that it assumed a form of symmetry between computing and the other three domains to which it was being related (the physical, life and social sciences). This raised the question as to whether this symmetry was really appropriate if computing wasn’t coequal with the others, and forced me to ask whether or not it was. The relational architecture could still be useful even if computing weren’t a coequal, but it would be more compelling if it were. So at this point I had to understand what it was that made the physical, life and social sciences what they were, and whether a case could be made that computing was a fourth such thing. Before this excursion, I had been thinking of this work as primarily about the nature and structure of computing, but not necessarily about establishing where computing fit in the pantheon of science and engineering. What thus arose as a subquestion may end up being of more lasting importance than the original question/idea.
How are faculty meetings at USC now that you’ve written this book? Are the physicists letting you sit at their table? I imagine the mathematicians may not be taking kindly to being subsumed into computing. For that matter, do the computer science people agree with what you’ve written?
I haven’t yet had much interaction with faculty in other departments at USC about the book. I have, though, given a talk based on the material at several universities, including USC, primarily to computer scientists, but there has usually been a smattering of folks from other disciplines in attendance as well. I get a range of responses, from highly enthusiastic, to appalled – particularly about the claim concerning mathematics – to unclear whether this is anything more than simply an academic exercise. None of the responses to date have shaken the basic claims, but of course the ones I’ve enjoyed the most are those where it is clear a light bulb has gone off in someone’s head.
You talk in the book about a bit of a crisis you had prior to taking on your new job at ISI. Can you talk about this?
At that point I had been working for two decades towards the ideal, or grand challenge, of understanding and building a cognitive architecture, with much of the latter years focused on a large-scale application of the architecture. But I no longer could see a path forward towards significant enhancements to what we had in hand, and couldn’t generate enthusiasm for what did seem possible. So something dramatically different, where I could learn new things and hopefully have a significant impact of a different kind, seemed in order. My job heading up New Directions at ISI did turn out to be this. When that wrapped up, I was able to return to working on cognitive architecture with new ideas and enthusiasm, and with what appears to be a very promising path forward.
What do you take from your relational architecture work for your cognitive architecture work? What do you think other computing folks can take away from the relational architecture?
One big thing was a meta-lesson about my own research style. I realized that, in general, research was unsatisfying to me unless I was making progress on understanding and exploiting deep underlying commonalities. This is what we were able to do in the early days working on the Soar cognitive architecture, and what I was able to do with the relational architecture, although these two pieces were in very different areas. I now believe that the inability to do this any longer along the path I had been following was a major factor in the crisis mentioned above. Fortunately, I can see how to do it again with my new Sigma cognitive architecture.
Another lesson was the reaffirmation of the importance of the interdisciplinarity that had always been part of my work on cognitive architecture. Combining work on natural and artificial intelligence is frequently neither understood nor appreciated within artificial intelligence even though AI is inherently multi-domain according to the relational analysis. The relational analysis has bolstered my confidence in what I always believed, and inspired me to be more evangelical than apologetic about it.
I hope more generally that the relational architecture will help people in computing think about the ways that their work either is or should be cross-domain, and to understand how what they do relates to what else is going on across the domain of computing. Independent of their own area, my hope is that it also helps them better understand the diverse and important scientific domain in which they work.
What’s next for you?
A big next step based on the book is to (co-)design and teach a new kind of Introduction to Computing course for incoming USC freshmen who are majoring in computer science. The first half of the course will focus on introducing the key concepts in computing, and the second half on providing the broad integrated perspective on computing described in the book. The hope is to provide them with the rich background and context that enables them to make sense of all of the more specific technical content they will see in the remainder of the program. If this goes well, an introductory course for the broader population of undergraduates is also a possibility.
To wrap up, let’s go back to the beginning for you. What was your first hands-on experience with a computer? And do you remember an early computer program that you wrote or saw that excited you about computers?
My first hands-on experience with a computer was during the summer of 1971, when I got the chance to learn a bit of the BASIC language using a very slow teletype connected to a shared central computer. What has both fascinated and excited me about computers ever since is the ability to turn concepts into behavior, along with the procedural intricacy that this involves. A big part of this for me personally has been the ability to turn ideas about how minds work into systems that behave according to these ideas.
Paul S. Rosenbloom is a professor of computer science at the USC Viterbi School of Engineering and works at the USC Institute for Creative Technologies on a new cognitive/virtual-human architecture – Sigma (Σ) – based on graphical models.
Julia Kim is a project director at the University of Southern California’s Institute for Creative Technologies. Julia studied the history of science at Harvard University (B.A., 1998; M.A., 2004), with a focus on the history of information science and technology.

For more information about Paul Rosenbloom’s book, read the USC news article.
Buy On Computing: The Fourth Great Scientific Domain on Amazon.