I'm very interested in what advantages the “global root class” approach gives a framework.
In other words: what were the reasons the .NET framework was designed with a single root object class that provides general functionality suitable for all classes?
We are currently designing a new framework for internal use (a framework on the SAP platform), and we are split into two camps – those who think the framework should have a global root, and those who think the opposite.
I am in the “global root” camp. My reasoning is that such an approach would yield good flexibility and reduce development costs, because we would not have to implement general functionality over and over.
So I'm very interested to know what reasons actually pushed the .NET architects to design the framework this way.
The most pressing cause for Object was containers (prior to generics), which could contain anything, instead of having to go C-style: “write it again for everything you need”. Of course, arguably, the idea that everything should inherit from a specific class, and then abusing that fact to utterly lose every speck of type safety, is so terrible that it should have been a giant warning light against shipping the language without generics. It also means that Object is thoroughly redundant for new code.
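To make that point concrete, here is a small Java sketch (our own example, not from the original answer – the class and method names are invented) showing how a pre-generics, Object-based container loses type safety, and how generics restore it:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical demo class: shows why a universal Object root was needed
// for pre-generics containers, and what that convenience cost.
public class RawContainers {
    // Pre-generics style: everything is stored as Object.
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static Object firstOfRaw() {
        List raw = new ArrayList();
        raw.add("hello");
        raw.add(42);            // compiles fine - no type safety at all
        return raw.get(0);      // caller must cast and hope
    }

    // Generic style: the compiler enforces the element type.
    public static String firstOfTyped() {
        List<String> typed = new ArrayList<>();
        typed.add("hello");
        // typed.add(42);       // would not compile
        return typed.get(0);    // no cast needed
    }
}
```

The raw version only works because every element is-an Object; the generic version makes the element type part of the container's type, which is exactly what removes the need for the Object trick in new code.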
I remember Eric Lippert once saying that inheriting from the System.Object class provided “the best value for the customer”. [edit: yep, he said it here]:
…They don’t need a common base type. This choice was not made out of necessity. It was made out of a desire to provide the best value for the customer.
When designing a type system, or anything else for that matter, sometimes you reach decision points — you have to decide either X or not-X… The benefits of having a common base type outweigh the costs, and the net benefit thereby accrued is larger than the net benefit of having no common base type. So we choose to have a common base type.
That’s a pretty vague answer. If you want a more specific answer, try asking a more specific question.
Having everything derive from the System.Object class provides a reliability and usefulness that I’ve come to respect a lot. I know that all objects will have a type (GetType), that they’ll be in line with the CLR’s finalization mechanism (Finalize), and that I can use GetHashCode on them when dealing with collections.
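The same guarantees exist on the Java side of the fence. As a minimal sketch (the helper names are our own invention), any object whatsoever can report its runtime type and hash code, because those members live on the common root:

```java
// Hypothetical helpers: they accept literally any reference, because every
// class ultimately derives from java.lang.Object.
public class RootGuarantees {
    public static String typeNameOf(Object o) {
        return o.getClass().getSimpleName(); // every object knows its type
    }

    public static int hashOf(Object o) {
        return o.hashCode(); // every object can be hashed for collections
    }
}
```

Without a common root, these methods could not be written once for all types.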
I think that Java and C# designers added a root object because it was essentially free to them, as in free lunch.
The costs of adding a root object are very much equal to the costs of adding your first virtual function. Since both the CLR and the JVM are garbage-collected environments with object finalizers, you need at least one virtual function for your java.lang.Object.finalize or System.Object.Finalize method. Hence, the costs of adding a root object are already prepaid: you get all the benefits without paying for them. This is the best of both worlds: users who need a common root class get what they want, and users who couldn’t care less can program as if it weren’t there.
Seems to me you have three questions here: one is why a common root was introduced to .NET, the second is what are the advantages and disadvantages of that, and the third is whether it’s a good idea for a framework element to have a global root.
Why does .NET have a common root?
In the most technical sense, having a common root is essential for reflection and for pre-generics containers.
In addition, to my knowledge, having a common root with base methods such as equals() and hashCode() was received very positively in Java, and C# was influenced by (among others) Java, so they wanted to have that feature as well.
What are the advantages and disadvantages of having a common root in a language?
The advantages of having a common root:
- You can give all objects certain functionality you consider important, such as equals() and hashCode(). That means, for example, that every object in Java or C# can be used in a hash map – compare that situation with C++.
- You can refer to objects of unknown type – e.g. if you just transfer information around, as pre-generic containers did.
- You can refer to objects of unknowable type – e.g. when using reflection.
- It can be used in the rare cases when you want to accept an object that can be of different and otherwise unrelated types – e.g. as a parameter of a printf-like method.
- It gives you the flexibility of changing the behavior of all objects by changing just one class. For example, if you want to add a method to all objects in your C# program, you can add an extension method to object. Not very common, perhaps, but without a common root it would not be possible.
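The first of those advantages can be made concrete with a small Java sketch (Point is our own hypothetical class, not from the answer): because equals() and hashCode() are declared on the common root, any type can override them and immediately participate in hash-based collections:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical value class: overriding the root-declared equals()/hashCode()
// is all it takes to make it usable as a HashMap key.
public class Point {
    final int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() { return Objects.hash(x, y); }

    public static String lookupDemo() {
        Map<Point, String> names = new HashMap<>();
        names.put(new Point(1, 2), "home");
        // A *different* instance with equal fields finds the same entry,
        // because HashMap calls the root-declared equals/hashCode.
        return names.get(new Point(1, 2));
    }
}
```

In C++, by contrast, a type must be wired up to each container separately (hash functors, comparators), precisely because there is no common root declaring these operations.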
The disadvantage of having a common root:
- It can be abused to refer to an object through the root type even when a more accurate type is available.
Should you go with a common root in your framework project?
Very subjective, of course, but when I look at the above pro-con list I’d definitely answer yes. Specifically, the last bullet in the pro list – flexibility of later changing the behavior of all objects by changing the root – becomes very useful in a platform, in which changes to the root are more likely to be accepted by clients than changes to an entire language.
Besides – though this is even more subjective – I find the concept of a common root very elegant and appealing. It also makes some tool usage easier, for example it’s now easy to ask a tool to show all the descendants of that common root and receive a quick, good overview of the framework’s types.
Of course, the types do have to be at least slightly related for that, and in particular, I’d never sacrifice the “is-a” rule just to have a common root.
Mathematically, it’s a bit more elegant to have a type system which contains top, and allows you to define a language a bit more completely.
If you’re creating a framework, then your types will necessarily be consumed by an existing codebase. You can’t have a truly universal supertype, since all of the other types in the consuming codebase already exist outside your hierarchy.
At that point, the decision to have a common base class depends on what you’re doing. It is very rare that many things have a common behavior, and that it is useful to reference these things via the common behavior alone.
But it happens. If it does, then go right ahead in abstracting that common behavior.
They pushed for a root object because they wanted all classes in the framework to support certain things (getting a hash code, converting to a string, equality checks, etc.). Both C# and Java found it useful to put all these common features of an Object into some root class.
Keep in mind that they aren’t violating any OOP principles or anything. Anything in the Object class makes sense in any subclass (read: any class) in the system. If you choose to adopt this design, make sure you follow this pattern. That is, don’t include things in the root that don’t belong in every, single class in your system. If you follow this rule carefully, I don’t see any reason why you shouldn’t have a root class that contains useful, common code for the whole system.
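Following that rule, a framework root would contain nothing beyond members that genuinely apply to every class in the system. A minimal Java sketch under that assumption (FrameworkObject and ExampleNode are invented names, not from the post):

```java
// A minimal sketch of a disciplined framework root: only services that make
// sense for *every* class - here, a stable identity and a debug label.
// Nothing domain-specific belongs in this class.
public abstract class FrameworkObject {
    private static long counter = 0;
    private final long id = ++counter;

    // Every framework object gets a unique id.
    public final long id() { return id; }

    // Every framework object can describe itself; subclasses may refine this.
    public String debugLabel() {
        return getClass().getSimpleName() + "#" + id;
    }

    // Tiny concrete subclass, purely for illustration.
    public static class ExampleNode extends FrameworkObject { }
}
```

The discipline is in what is left out: no caching hooks, no persistence fields, no “global” variables, because those do not belong in every single class.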
A couple of reasons: common functionality that all child classes can share, and the ability for the framework writers to build a lot of other functionality into the framework that everything could use. An example would be ASP.NET caching and sessions. Almost anything can be stored in them, because the add methods were written to accept objects.
A root class can be very alluring, but it is very, very easy to misuse. Would it be possible to have a root interface instead? And just have one or two small methods? Or would that add a lot more code than is required to your framework?
I ask because I am curious about what functionality you need to expose to all the possible classes in the framework you are writing. Will it truly be used by all the objects? Or just by most of them? And once you create the root class, how will you prevent people from adding random functionality to it? Or random variables they want to be “global”?
Several years ago, on a very large application, I created a root class. After a few months of development it was populated by code that had no business being there. We were converting an old ASP application, and the root class had become the replacement for the old global.inc file we had used in the past. I had to learn that lesson the hard way.
All standalone heap objects inherit from Object; that makes sense because all standalone heap objects must have certain common aspects, such as a means of identifying their type. Otherwise, if the garbage collector had a reference to a heap object of unknown type, it would have no way of knowing which bits within the blob of memory associated with that object should be regarded as references to other heap objects.
Further, within the type system, it is convenient to use the same mechanism for defining the members of structures and the members of classes. The behavior of value-type storage locations (variables, parameters, fields, array slots, etc.) is very different from that of class-type storage locations, but such behavioral differences are achieved in the source-code compilers and execution engine (including the JIT compiler) rather than being expressed in the type system.
One consequence of this is that defining a value type effectively defines two types–a storage-location type and a heap-object type. The former may be implicitly converted to the latter, and the latter may be converted to the former via typecast. Both types of conversion work by copying all public and private fields from one instance of the type in question to another. Additionally, it is possible using generic constraints to invoke interface members on a value-type storage location directly, without making a copy of it first.
All of this is important because references to value-type heap objects behave like class references and not like value types. Consider, for example, the following code:
    string testEnumerator<T>(T it) where T : IEnumerator<string>
    {
        var it2 = it;
        it.MoveNext();
        it2.MoveNext();
        return it.Current;
    }

    public void test()
    {
        var theList = new List<string>();
        theList.Add("Fred");
        theList.Add("George");
        theList.Add("Percy");
        theList.Add("Molly");
        theList.Add("Ron");
        var enum1 = theList.GetEnumerator();
        IEnumerator<string> enum2 = enum1;
        Debug.Print(testEnumerator(enum1));
        Debug.Print(testEnumerator(enum1));
        Debug.Print(testEnumerator(enum2));
        Debug.Print(testEnumerator(enum2));
    }
If the testEnumerator() method is passed a storage location of value type, it will receive an instance whose public and private fields are copied from the passed-in value. Local variable it2 will hold another instance whose fields are all copied from it. Calling MoveNext on it2 will not affect it.
If the above code is passed a storage location of class type, then the passed-in value, it, and it2 will all refer to the same object, and thus calling MoveNext() on any of them will effectively call it on all of them.
Note that casting List<String>.Enumerator to IEnumerator<String> effectively turns it from a value type into a class type. The type of the heap object is List<String>.Enumerator, but its behavior will be very different from that of the value type of the same name.
This design really traces back to Smalltalk, which I would view largely as an attempt at pursuing object orientation at the expense of nearly any and all other concerns. As such, it tends (in my opinion) to use object orientation, even when other techniques are probably (or even certainly) superior.
Having a single hierarchy with Object (or something similar) at the root makes it fairly easy (for one example) to create your collection classes as collections of Object, so it’s trivial for a collection to contain any kind of object.
In return for this rather minor advantage, you get a whole host of disadvantages though. First, from a design viewpoint, you end up with some truly insane ideas. At least according to the Java view of the universe, what do Atheism and a Forest have in common? That they both have hashcodes! Is a Map a collection? According to Java, no, it’s not!
In the ’70s when Smalltalk was being designed, this sort of nonsense was accepted, primarily because nobody had designed a reasonable alternative. Smalltalk was finalized in 1980 though, and by 1983 Ada (which includes generics) was designed. Although Ada never achieved the kind of popularity some predicted, its generics were sufficient to support collections of objects of arbitrary types — without the insanity inherent in the monolithic hierarchies.
When Java (and to a lesser extent, .NET) were designed, the monolithic class hierarchy was probably seen as a “safe” choice — one with problems, but mostly known problems. Generic programming, by contrast, was one that almost everybody (even then) realized was at least theoretically a much better approach to the problem, but one that many commercially oriented developers considered rather poorly explored and/or risky (i.e., in the commercial world, Ada was largely dismissed as a failure).
Let me be crystal clear though: the monolithic hierarchy was a mistake. The reasons for that mistake were at least understandable, but it was a mistake anyway. It’s a bad design, and its design problems pervade almost all code using it.
For a new design today, however, there’s no reasonable question: using a monolithic hierarchy is a clear mistake and a bad idea.
What you want to do is actually interesting, it’s hard to prove if it’s wrong or right until you are done.
Here are some things to think about:
- When would you ever deliberately pass this top-level object over a more specific object?
- What code do you intend to put into every single one of your classes?
Those are the only two things that can benefit from a shared root. In a language, it is used to define a few very common methods and to allow you to pass objects that haven’t been defined yet, but those shouldn’t apply to you.
Also, from personal experience:
I once used a toolkit written by a developer whose background was Smalltalk. He made all his “Data” classes extend a single class, and all the methods took that class – so far so good. The problem is that the different data classes weren’t always interchangeable, so my editor and compiler couldn’t give me ANY help as to what to pass in a given situation, and I had to refer to his docs, which didn’t always help.
That was the most difficult to use library I’ve ever had to deal with.
Because that’s the OOP way. It’s important to have a type (object) that can refer to anything. An argument of type object can accept anything. There’s also inheritance: you can be sure that ToString(), GetHashCode(), etc. are available on anything and everything.
Even Oracle has realised the importance of having such a base type and is planning to remove primitives from Java around 2017.