Book review by Ted Felix
I really wish I had read Object Oriented Software Engineering: A Use Case Driven Approach (OOSE) in 1992 when it came out, and read it again every year after. Then, once Larman's Applying UML and Patterns came out, I should have read that every year instead.
Even today this is a really great book. You might be turned off initially, as I was, by its rather broad coverage of the software lifecycle (industrial processes, testing, and so on) and its rather abstract explanations of how a methodology is designed. However, if you stick with it, you'll find the real gold is in three key areas.
First is the discussion of function/data (function oriented) programming versus object oriented programming (pp 135-141). It's a balanced presentation that starts by showing that object oriented programming isn't really that much better than function/data programming. Then it gives valid arguments why O-O has an advantage over function/data. This is the first time I've encountered such a rational explanation.
Instead of just saying that O-O makes code more resilient to change, Jacobson provides the reader with a "robustness analysis" process where potential future changes shape the organization of the code.
"...the analysis model will not be a reflection of what the problem domain looks like. ... The reason is simply to get a more maintainable structure where changes will be local and thus manageable. We thus do not model reality as it is, as object orientation is often said to do, but we model the reality as we want to see it and to highlight what is important in our application." (pg 195)
Next is the introduction of three types of objects: Entity, Interface (now called Boundary), and Control (pg 130). Along with these three types, advice is provided on when they should be used, and how code should be sliced up into these types of objects. The concept of Control objects is very important to good object oriented design. All the books I read during the early 1990's insisted on modeling the real-world only, excluding Interface and Control objects. This turns out to be quite difficult.
"The control objects model functionality that is not naturally tied to any other object..." "We do not believe that the best (most stable) systems are built by only using objects that correspond to real-life entities, something that many other object-oriented analysis and design techniques claim." "...behavior that we place in control objects will, in other methods, be distributed over several other objects, making it hard to change this behavior." (pg 133)
Finally, Jacobson provides two excellent case studies which give you plenty to think about (Chapters 13 and 14). These case studies make the book still worthwhile today since anything that exercises O-O thinking is worth the time.
If you could read only one book on OO analysis and design, Larman's Applying UML and Patterns is the right book to read. However, if you want to see a little more history and have two more case studies to think about, Jacobson's Object Oriented Software Engineering makes a great companion volume. You certainly can't beat the price at around $5 used. Even at the new price of $60, this book is worth it.
Using blocks as a higher level of abstraction struck a chord with me.
Pg 39 "design with building blocks" Building blocks are a design concept that enables the transition from analysis to implementation. Each analysis object translates into a design block which may translate into more than one class in the implementation.
Pg 111 "block design" This is hardware-inspired software design. Similar to component-based design.
Pg 211 "blocks to handle the environment" OS wrappers are another kind of object that is recommended.
The examples show how blocks lead to persistence frameworks and other complex, potentially reusable structures to support the implementation of an analysis concept.
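As a rough illustration of the idea that one analysis object can become a design block of several implementation classes, here is a hypothetical Python sketch (all names are mine, not from the book):

```python
# Hypothetical sketch: one analysis object ("Order") implemented as a
# design block of several classes behind a single facade.

class OrderRecord:
    """Holds the order data."""
    def __init__(self, items):
        self.items = list(items)      # list of (name, price) pairs

class OrderPricer:
    """Pricing policy, split out so it can change on its own."""
    def total(self, record):
        return sum(price for _, price in record.items)

class OrderBlock:
    """The block's facade: the only class clients of the block see."""
    def __init__(self, items):
        self._record = OrderRecord(items)
        self._pricer = OrderPricer()

    def total(self):
        return self._pricer.total(self._record)
```

The analysis model only knows about an "Order"; how many classes implement it inside the block is a design decision.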
In RUP, what happened to the "blocks" from OOSE? Are they still there? Can an "object" in the analysis model translate into more than one object in the design/implementation models?
Jacobson argues throughout for use cases as the root of software design. He calls it "use case driven design" (pg 129).
Bertrand Meyer in his book Object-Oriented Software Construction (2nd edition) (OOSC-2) argues that use cases are harmful to development (pg 738). Given that OOSC-2 is not a book about large-scale software architecture, one must think carefully about Meyer's opinion. He gives three reasons why use cases are bad. Use cases...
Emphasize ordering. Throughout OOSC-2, Meyer stresses that it is important to forget about the sequencing of operations in a program since that will adversely affect the discovery of sensible operations for a class. One should think about the classes themselves and forget about how they will be used. But, if you don't consider how a class will be used, there's no sense in creating the class. This thinking might be workable for entity objects, but it makes no sense for the control and boundary objects that must support those entity objects to keep them pure.
Focus on how the user sees the system. Meyer asks how we can possibly consider how a system will be used when it doesn't exist yet. I don't buy it. The whole point of coming up with use cases is to dream about how the system should be used. The emphasis is on the user, where it rightly should be. The resulting use cases help make a system that matches the user's needs. Meyer is clearly hostile to the user throughout his book.
Favor a functional approach. Jacobson would call this a "function/data approach". Here, I think Meyer has a point. The discovery of objects is pretty vague in OOSE, as is the discovery of operations on those objects via an interaction diagram. I never liked interaction diagrams. However, Meyer does not provide a suitable alternative that is applicable to large systems. Jacobson at least makes an attempt.
As is usually the case in OOSC-2, Meyer is a bit reactionary about this. He says that use cases should be avoided except by teams that have developed systems consisting of "several thousand classes each in a pure O-O language". (Do I detect a bit of exaggeration here?) This, unfortunately, is typical of Meyer, so it is hard to take him very seriously.
Meyer is honest, however. He lets us know that he formed this opinion based on seeing projects that abused use cases. I can certainly see that use cases could be used to justify falling back on old habits. This doesn't mean Jacobson's techniques are bad. It just means some people don't understand them.
Jacobson's process is intended to support large-scale system design. Meyer's process is focused on low-level class design. I find Jacobson's advice much more useful in practice.
While Addison-Wesley seems to be teasing us by listing a second edition of OOSE on their website, I don't believe this will ever happen. Jacobson has already written several more modern books, and Larman's Applying UML and Patterns covers OO design better than even Jacobson could. There just isn't a need for a second edition these days.
The Unified Software Development Process (Jacobson, Booch, Rumbaugh 1999) could be considered OOSE's second edition, but it isn't nearly as good as OOSE. It is the book where "Boundary" is first used instead of "Interface". It only contains one running case study, an ATM.
Jacobson will be unveiling his Essential Unified Process mid-2006. Hopefully this will yield a new book comparable to OOSE.
Note that there were two versions of the book. The latest is the "revised printing" of 1993.
Jacobson's Website - Find out what he's been up to lately.
* * * * *
What follows are random notes. I'll organize and distill these as time goes by.
Page 31, the case is made for having dynamic objects and information-carrying objects: "systems often exhibit behavior that cannot naturally be assigned to any particular information-carrying object." However, the previous page says, "...function/data methods that separate data from functions have proved, in the long run, to be a house of cards."
Pg 77 "The majority of object-oriented methods today  have only one type of object."
Pg 78, the distinction between physical and conceptual objects is mentioned.
Pg 79. Here are two good pro-O-O arguments that I'm prepared to buy. First, O-O reduces the semantic distance between the problem domain and the code. Second, it is easier (than with function/data models) to identify which areas are more or less likely to change.
The other point that was made a few pages earlier was that a function/data breakdown means the format of the data is known by numerous functions. This means a change to the data affects all of those functions. Encapsulation is intended to reduce this problem.
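That coupling argument can be made concrete with a small hypothetical sketch (the record layout and names are invented for illustration): in the function/data style, every function knows the data format, so a format change ripples through all of them; the encapsulated version confines the format to one class.

```python
# Function/data style: every function knows the record is a
# (name, cents) tuple. Changing that layout breaks all of them.
def format_price_fd(record):
    return f"{record[0]}: ${record[1] / 100:.2f}"

def is_expensive_fd(record):
    return record[1] > 10_000

# Encapsulated style: only the class knows the storage format.
class PricedItem:
    def __init__(self, name, cents):
        self._name = name
        self._cents = cents   # change this representation and only this class changes

    def formatted(self):
        return f"{self._name}: ${self._cents / 100:.2f}"

    def is_expensive(self):
        return self._cents > 10_000
```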
This book is not an intro to O-O, and Jacobson admits this, pointing the reader to other books for learning O-O.
Pg 115-117 has a great argument for O-O versus function/data in the box.
Pg 123 System Architecture is detailed in the analysis model. Object design is in the design model.
Pg 130 section 6.5.1 is of most interest to me. He proposes three dimensions: Behavior, Information, and Presentation. He then proposes three object types, Entity, Interface, and Control. While these object types each lean toward a single dimension, they are a bit blurry and may contain any of the three dimensions.
Section 6.5 (page 131) covers the Entity/Interface/Control architecture pattern and its variations, such as Presentation/Abstraction/Control (PAC) and the 3-tier business model. I've found my own variation on these 3-object patterns to be very helpful over the years, and further research could be interesting. PAC is attributed to Coutaz, "Architecture Models for Interactive Software," Proceedings of the Third European Conference on Object-Oriented Programming (ECOOP 1989). The 3-tier model is attributed to John Davis and Tom Morgan, "Object-oriented development at Brooklyn Union Gas," IEEE Software, 10(1):67-74, January 1993.
Pg 134-135 compares EIC to PAC, MVC, and 3-tier. Plus some other papers I'm not familiar with. Need to focus on this as a source for research.
Pp 135-141 is wonderful. Worth the price of admission, and probably why this book was held in such high regard. This is a beautiful argument for why O-O isn't really that much better than function-oriented programming when it comes to stability in the face of change. This is refreshing, as most O-O books insist O-O is so much better at dealing with change. Then Jacobson shows how the hybrid nature of EIC can help make some changes easier. I don't really buy it, because sometimes the difficult change is the correct change, while the easy change is wrong for the system. As an example, we may want to implement a "credit interest" transaction. This will probably be different for each account, so creating a separate InterestCreditor object would make no sense. Instead, each account object would need to be changed. In the function/data approach, adding a new transaction for crediting interest would be easy and localized. Function/data wins in this situation.
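Here is a hypothetical Python sketch of that credit-interest dilemma (the account classes and rates are invented for illustration, not taken from the book):

```python
# In the O-O version, the new transaction forces an edit inside
# every account class:

class SavingsAccount:
    def __init__(self, balance):
        self.balance = balance
    def credit_interest(self):       # new method added to this class...
        self.balance *= 1.03

class CheckingAccount:
    def __init__(self, balance):
        self.balance = balance
    def credit_interest(self):       # ...and to this one, and so on
        self.balance *= 1.01

# In the function/data version, the change is one new, localized
# function over the shared account records:
def credit_interest(account_record):
    rate = {"savings": 1.03, "checking": 1.01}[account_record["kind"]]
    account_record["balance"] *= rate
```

The O-O change is scattered across every account class; the function/data change is a single new function, which is exactly why function/data wins here.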
Still, the object-oriented vs. function/data arguments are worth the price of this outdated book, esp. at $5 like I paid.
By the end of chapter 6, we have an overview of this industrial (heavyweight) OOSE development process. It includes use cases, a domain object model, an interface model, interaction diagrams, an analysis model (EIC objects), and a design model (blocks). The Rational Unified Process (RUP) is based primarily on Jacobson's OOSE/Objectory.
Pg 155 The author keeps emphasizing that the analysis model must not be concerned with the implementation limitations. The book mentioned (which I've seen mentioned quite a bit lately) is McMenamin and Palmer 1984 Essential Systems Analysis.
Pg 167 "Many other object-oriented methods, such as those of Coad and Yourdon (1991) and Booch (1991), focus entirely on [problem domain] models ... we develop an analysis model that is more robust and maintainable in the face of future changes, rather than using a problem domain model to serve as the base for design and implementation."
Pg 191 One control object per use case. This isn't how I do it. I wonder what the code I develop at work would look like given this approach. The problem domain includes emulation of incredibly complex hardware devices containing their own very complex user interfaces. Probably huge and unmanageable. Imagine the thousands of little "use case" classes.
Pg 190 7.3.3 Control Objects is a section to be carefully read and understood. It's a bit tough since it draws on the running example, and some of the wording is a bit unclear.
I must admit that I am quite annoyed by the number of typos I've had to correct. It appears as if the editor fell asleep during certain chapters.
Did Blaha/Rumbaugh have any case studies of note? Just the running ATM example which appears to be quite popular (Jacobson/Booch/Rumbaugh 1999 too).
The warehouse management case study is very well done. It shows how to discover controller objects which are often the most tricky to find.
Pg 387 (4/8/2006). It is interesting that there is no higher abstraction being brought into play. Every object can talk to every other object, so views talk directly to entities, and to controllers if needed. My approach is a little more disciplined, and it might be because I am coding without a model, so I'm making up the structure as I go along. It is nice to have some pattern to work toward, even if it is arbitrary.
My boundary (UI) classes are different from those presented here, because mine serve only to hide the ugliness of the OS (MFC). My control objects contain the behavior that would have been in the UI classes in an OOSE design. Although, since these analysis "objects" become blocks at design time that may consist of one or more programming objects, one could envision a boundary block actually consisting of two objects: one that is solely MFC, and another that contains the UI behavior.
As for the Control/Entity side of things, I believe the problem lies in the fact that I am not modeling the real world. The only way I could would be to start from the block diagram of the system I'm emulating and create objects based on those blocks. Instead, I am starting with a growing mess of use-cases, and no domain model to go with them. So, I am creating a new structure of objects that mimic an imaginary world that I devise to explain the behavior of the system being emulated. This makes it somewhat hard to decide what objects there are, so I tend to just lump everything together into a fat entity object global to everyone. When you have no idea what the system looks like, and what the requirements are, this is about all you can do. Perhaps it is that my understanding is based on the function of the system, so a function-oriented approach is best until some structure can be found.
If you know nothing about the internal workings of a system you are emulating, then the system is nothing more than a view of the World object, which is a well-defined entity object from the domain. Since the entire system is merely a view, there is no need for any objects of any kind. This is incredibly unrealistic, however, as my code currently stands at 18,000 lines for this emulation. Imagine 18,000 lines, and a good excuse for absolutely no structure of any kind.
It is possible to condense objects out of the seeming randomness once patterns of usage begin to appear. In fact, when the system is done, this is relatively easy. Maybe there is a pattern here. For a completely mysterious system, start with a simple monolithic structure, then condense objects out of that structure as the system is better understood. Sounds a lot like refactoring.
I wonder how PAC differs from this. I think the main difference is that PAC has one controller per interface and entity, while OOSE recommends one controller per use case or interesting piece of functionality.
When one is creating an object structure from one's imagination, where do the functions naturally belong? There is no real-world counterpart to refer to. So, I guess ease of modification is the only guidance. In the emulator I work on, what functionality belongs in the control objects, and what belongs in the entity objects?
Pg 391 admits that the interaction diagrams are incomplete as they do not cover the internals of a block. This means two things. First, MDA is impossible, and second, you can do whatever you want within a block. This is really fascinating given that my emulator would probably show up as a single boundary (interface) object in a diagram. That means that I can do whatever I want. At 18,000 lines, I think not.
Pg 414 Combining entity and interface objects for a UI/data pair is suggested. However, this is avoided in the name of robustness. Unfortunately, the reason given is impenetrable. It would have been nice to have a more concrete defense rather than what appears as generic hand-waving.
Pg 423 persistence and database within an object. An interesting idea to consider. Imagine a class Person with a FindInAddressBook() function. This is really bizarre as it encapsulates a database within a record. This sounds completely backwards. I think it was mentioned in chapter 10 also, but I skimmed that as it wasn't very interesting to me. The claim is that this increases robustness by hiding how the lookup is done. But this is wrong since we can just as easily hide how this is done in a separate class.
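To show what I mean, here is a hypothetical Python sketch of both placements (all names invented; this is not the book's code). The first puts the lookup on the record itself; the second hides the lookup in a separate class, which conceals the mechanism just as well while keeping the record a plain record.

```python
# Jacobson-style: the lookup lives on the record class itself, so the
# record knows about the collection it is stored in.
class Person:
    def __init__(self, name):
        self.name = name

    @staticmethod
    def find_in_address_book(book, name):
        return next((p for p in book if p.name == name), None)

# The alternative suggested above: a separate class encapsulates the
# lookup, and Person stays a plain record.
class AddressBook:
    def __init__(self):
        self._people = []

    def add(self, person):
        self._people.append(person)

    def find(self, name):
        return next((p for p in self._people if p.name == name), None)
```

Either way the caller can't see how the search is done, so the robustness argument doesn't favor putting it inside the record.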
The case studies are hard to understand in places because they are based on more information than is presented in the text. Annoying, but workable.
The chapter 14 telecom example is in Smalltalk! Painful on the eyes.
Chapter 16 is where we see that this book was the most important work on OO at the time. It may still be, and we would really benefit from an update. Does Larman give as complete an overview as Jacobson? It is very clear that the Rational "Unified" Process is really Jacobson's Objectory, with Booch and Rumbaugh allowed to come along for the ride. Perhaps Aspect-Oriented Software Development with Use Cases (Jacobson 2004) is really his next book. Need to work it into my reading list.
I really liked the concept of robustness (to future change) analysis. This plays directly into my belief that OO alone cannot make a system resilient in the face of change. Instead, we need to anticipate change and design based on our suspicions.
Read from March 23, 2006 to April 16, 2006.
Copyright ©2006, Ted Felix.