C# Under The Microscope
To Begin at the Ending
I'm a big fan of programming languages, possibly more than of actual programming. Every once in a while I hear about this new language that is just "brilliant", that "does things differently" and that "takes a whole different approach to programming". I typically then take the necessary time off my regularly scheduled C++ programming, learn enough about the language to get excited about the new one, but not enough to actually do anything useful with it, rave about it for a couple days, and then quietly and without protest go back to my C++ programming.
And so, when I learned of Microsoft's new up-and-comer, C# (pronunciation: whatever), I became duly excited and went forth to learn as much about it as possible.
Last things first: On paper, C# is very interesting. It does very little that's truly new and innovative, but it does do several things differently, and through this paper I hope to explore and present at least some of the more important differences between C# and its obvious influences: C++ and Java. So, skipping the obligatory Slashdot "speaking favorably of Microsoft" apology, let's talk about C#, the language.
How is it like Java/C++?
In the look & feel department, C# feels very much like C++, more so even than Java. While Java borrows much of the C++ syntax, some of the corresponding language constructs have a slightly different form of use. This is hardly a complaint, but it's interesting to note that the designers of C# went a little further in making it look like C++. This is good for the same reason it was good with Java. Being a professional C++ programmer, I use C++ far more than any other language. Eiffel, for instance, has a much cleaner syntax than C++, C#, or Java, and at face value it does seem as though one should bear with new syntax if doing so leads to cleaner, more easily understandable code; but for an old dog like myself, not having to remember so much new syntax when switching to another language is nothing short of a blessing.
C# borrows much from Java, a debt which Microsoft has not acknowledged, and possibly never will. Just like Java, C# does automatic garbage collection. This means that, unlike with C and C++, there is no need to track the use of created objects: the runtime knows when objects are no longer in use and eventually destroys them. This makes working with large object groups considerably simpler, although there have been a few instances where I was faced with a programming problem where the solution depended on objects *not* being automatically destroyed, as they were supposed to exist separate from the main object hierarchy and would take care of their own destruction when the time was right. Stroustrup's vision for C++, by contrast, treats automatic garbage collection as an optional feature, which might make the language more complicated to use, but would allow better performance and increased design flexibility.
One interesting way in which C# deals with the performance cost of automatic garbage collection is by letting you define classes whose objects are always copied by value, instead of the default copy by reference, which means such objects never need to be garbage-collected. This is done, confusingly enough, by declaring them as structs instead of classes. This is very different from C++ structs, even though the declaration syntax is exactly the same: a C++ struct is just a class whose members are public by default instead of private. Another idea lifted directly from Java, and a controversial one, concerns multiple inheritance. In what seemed like a step backwards, Java does not allow a class to inherit from more than one class. Java does let you define "interfaces", which work like C++ abstract classes but are semantically clearer: an interface is a functional contract that declares one or more methods. A class can choose to "sign" such a contract by inheriting it and providing a working implementation for every method the interface declares, and a class may implement as many interfaces as it wants. The rationale for all this is that inheriting from more than one class raises too many potential problems, most notably clashing implementations and repeated inheritance. On a side note, the cleanest separation between interface and implementation that I know of is Sather's, where a class can provide either an implementation or an interface, but not both.
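As a rough illustration of the interface "contract" idea (the type names here are invented for illustration, not taken from any real library), a C# sketch might look like this:

    interface ISerializer
    {
        // the contract: any implementer must provide this method
        string Serialize();
    }

    class TextSerializer : ISerializer
    {
        // "signing" the contract by implementing every declared method
        public string Serialize()
        {
            return "serialized as text";
        }
    }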
So what else is new?
One new feature that I mentioned already was that of copy-by-value objects. This seemingly small improvement is a potentially huge performance saver! With C++, one is regularly tempted to describe the simplest constructs as classes, and in so doing make them safer and simpler to use. For example, a phone directory program might define a phone record as a class, and would maintain one PhoneRecord object per actual record. In Java, each and every one of those objects would be garbage collected! Now, Java uses mark-and-sweep in order to garbage collect. The way this works is this: the JVM starts with the program's main object and recursively descends through references to other objects, marking every object it traverses as referenced. When this is done, all of the objects that aren't marked are destroyed. In the phone book program, especially if there are thousands and thousands of phone records, this can drastically increase the time it takes the JVM to go through the marking phase. In C#, you can avoid all this by defining PhoneRecord as a struct instead of a class.
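To make the phone book example concrete, here is a minimal sketch of the struct version (field names invented for illustration). The record values live inline in the array, rather than as thousands of separately collected heap objects:

    struct PhoneRecord
    {
        public string Name;
        public string Number;
    }

    class PhoneBook
    {
        static void Main()
        {
            // 10,000 records stored inline in one array, not 10,000
            // individually garbage-collected objects
            PhoneRecord[] records = new PhoneRecord[10000];
            records[0].Name = "Alice";
            records[0].Number = "555-0100";
        }
    }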
Another thing that C# does better than Java is type unification. In Java, all classes are implicitly descendants of the Object class, which supplies several extremely useful services. C# classes are likewise all eventual descendants of the object class, but unlike in Java, primitives such as integers, booleans and floating-point types are treated as regular classes. Java supplies wrapper classes corresponding to the primitive types, and mapping between an object value and a primitive value is simple enough, but C# makes it that much simpler by eliminating the duplication.
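A small sketch of what this unification buys you in practice (assuming the standard Console class is available): an int can be used directly where an object is expected, with no separate wrapper class.

    class UnificationDemo
    {
        static void Main()
        {
            int i = 42;
            object boxed = i;                        // an int used as an object
            System.Console.WriteLine(i.ToString());  // a method called directly on a primitive
            System.Console.WriteLine(boxed);
        }
    }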
Personally, I found C# support of events to be a very exciting new feature! Whereas an object's methods operate the object in a certain way, its events let the object notify the outside world of particular changes in its state. A Socket class, for instance, might define a ReadPossible event, and a data object might define a DataChanged event. Other objects may then subscribe to such an event so that they can do some work when the event is raised. Events may very well be considered "reverse functions", in the sense that rather than operating the object, they allow the object to operate on the outside world, and in my programming experience, events are almost as important as methods themselves.
While you could always implement events in C by taking pointers to functions, or in C++ and Java by taking objects that subclass a corresponding handler type, C# allows you to define events as regular class members. Such event members can be declared to take any delegate type. Delegates are the C# version of function pointers: whereas a C function pointer consists of nothing but a callable address, a delegate stores both an object reference and a method reference. Delegates are callable, and when called, invoke the stored method on the stored object. This design may seem less object-oriented than the Java approach of defining a handler interface and having subscribers implement it, but it is considerably more straightforward and makes using events nearly as simple as invoking object methods.
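A minimal sketch of the DataChanged example in these terms (all names invented for illustration): the delegate type fixes the handler signature, the event is declared as a class member of that delegate type, and a subscriber attaches a method to it.

    delegate void DataChangedHandler(string newValue);

    class DataObject
    {
        public event DataChangedHandler DataChanged;

        private string data;

        public void Set(string value)
        {
            data = value;
            if (DataChanged != null)
                DataChanged(value);   // notify all subscribers
        }
    }

    class Subscriber
    {
        static void OnChanged(string newValue)
        {
            System.Console.WriteLine("data is now " + newValue);
        }

        static void Main()
        {
            DataObject obj = new DataObject();
            // the delegate captures the target method (and, for instance
            // methods, the object it belongs to)
            obj.DataChanged += new DataChangedHandler(OnChanged);
            obj.Set("hello");
        }
    }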
Events are one example of how C# takes a popular use of pre-existing object-oriented mechanisms and makes it explicit by giving it a name and logic of its own. Properties are another example, even though they're not as much of a labor-saver as events are. It is very commonplace in C++ to provide "getters" and "setters" for private data members, in order to provide controlled access to them. C# turns such controlled data members into properties, and the declaration syntax is such that you provide getter and setter bodies for each property. In fact, a property does not have to correspond to a real data member at all! It may very well be the product of some calculation or other operation.
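A sketch of a property declaration, including one that is computed rather than stored (the names are invented for illustration):

    class Circle
    {
        private double radius;

        // a property backed by a real data member
        public double Radius
        {
            get { return radius; }
            set { radius = value; }
        }

        // a property that is purely the product of a calculation
        public double Area
        {
            get { return 3.14159265358979 * radius * radius; }
        }
    }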
And then there is, by far, the ugliest, most redundant and hardest-to-understand language construct in C#: the Attribute. Attributes are objects of certain types that can be attached to any variable or static language construct, and at run time practically anything can be queried for the values of the attributes attached to it. This sounds like the sort of hack someone would bolt onto a language ten years into its use, when there was no other way to add something important without breaking backwards compatibility. Attributes are C#'s version of Java reflection, but with none of the elegance and appropriateness. In general, and especially in light of C#'s overall design, the Attributes feature is out of place, and inexcusable.
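For readers who haven't seen the mechanism, a rough sketch of what attaching and querying an attribute looks like (the Author attribute is invented for illustration; the query goes through the standard GetCustomAttributes reflection call):

    using System;

    [AttributeUsage(AttributeTargets.Class)]
    class AuthorAttribute : Attribute
    {
        public string Name;
        public AuthorAttribute(string name) { Name = name; }
    }

    [Author("someone")]
    class Document
    {
        static void Main()
        {
            // query the attached attributes at run time
            object[] attrs = typeof(Document).GetCustomAttributes(typeof(AuthorAttribute), false);
            if (attrs.Length > 0)
                Console.WriteLine(((AuthorAttribute)attrs[0]).Name);
        }
    }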
What is it missing?
Being an unborn language, there is much that C# does not yet promise to deliver, and for which it can't be criticized. First of all, there is no telling just how well it will perform. Java is, in many ways, the better language, but one of the prime reasons it has been avoided is its relatively slow performance, especially compared to corresponding C and C++ implementations. It's not yet clear whether C# programs will need the equivalent of a Java Virtual Machine or whether they can be compiled directly into standalone executables, which might positively affect C#'s performance and possibly even set it up as a viable successor to C++, at the very least on Windows. While there is much talk of C# being cross-platform, it is unclear just how feasible implementing C# on non-Windows platforms is going to be. The required .NET framework consists of much that is, at least at the moment, Windows-specific, and C# relies heavily on Microsoft's Component Object Model. All things considered, setting up a proper environment for C# on other platforms should prove to be a massive undertaking, one that perhaps none other than Microsoft can afford.

Furthermore, while there is mention of a provided system library, it's not clear what services such a library would provide. C++ provides a standard library that allows basic OS operations, the immensely useful STL, and a powerful stream I/O system with basic implementations for files and memory buffers. The Java core libraries go much further by providing classes for anything from data structures to communications to GUIs. It remains to be seen how C#'s system library will fare in comparison.
One thing that's sure to be missing from C#, and very sadly at that, is any form of genericity. Genericity, as implemented in C++, allows one to define "types with holes". Such types, when supplied with the missing information, are used to create new types on the spot, and are therefore considered to be "templates" for types. A good example of a useful type template is C++'s list, which can be used to create linked-lists for values of any type. Unlike a C linked-list that takes in pointers to void or a Java linked list that takes Object references, a list instantiated from the C++ list template is type-safe. That is to say, it will only take in values of the type for which it was instantiated. While it is true that inheritance and genericity are often interchangeable, having both makes for a safer, possibly faster development platform.
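To see what is lost without genericity, here is a sketch of the kind of object-based container C# (like Java) forces on you, with the run-time cast that a type-safe template would make unnecessary (names invented for illustration):

    class ObjectStack
    {
        // fixed capacity, no error handling -- just a sketch
        private object[] items = new object[16];
        private int count;

        public void Push(object item) { items[count++] = item; }
        public object Pop()           { return items[--count]; }
    }

    class StackDemo
    {
        static void Main()
        {
            ObjectStack s = new ObjectStack();
            s.Push("hello");
            // the cast is required, and a wrong type would only fail at run time
            string str = (string)s.Pop();
            System.Console.WriteLine(str);
        }
    }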
The designers of C# have admitted the usefulness of genericity, but also confessed that C# is not going to support genericity on first release. More interestingly, they are unhappy with C++'s approach to genericity, which is based entirely on templates. It would be interesting to see what approach C# would take towards the concept, seeing as templates are pretty much synonymous with genericity at the moment.
To sum it up
Many now refer to C# as a Java wannabe, and there is much evidence to support that notion. C# doesn't only borrow a number of ideas from Java; it seems to follow up on Java's sense of clean design. It's a somewhat sad observation, then, that C#, purely as a language, not only provides a fraction of the innovation and daring that Java did, it also falls just a little behind Java where cleanliness and simplicity are concerned. However, if you're someone like myself, who uses Windows as a primary development platform and needs to use C or C++ because of the overhead that Java incurs, it's possible that C# will turn out to be a very beneficial compromise.
Wrong Direction (Score:3)
C was created specifically to emulate a "universal cpu" and to make it easier to write software across platforms. C++ extended that mission with the added benefit of reusable code (cross-application). C# is a step in the wrong direction if it pretends to be a language in the same class as others with 'C' in the name.
As far as I can see, its main innovations are little more than invisible methods.
--------
Yeah, I'm a Mac programmer. You got a problem with that?
Re:Apple releases a new language called C#++ (Score:3)
(Note: Having to explain this joke means it is a great failure.)
Re:This level of language... (Score:2)
"I mean, what would be the advantage??"
There probably won't be a technical advantage, but MS seldom creates things because they'll have a technical advantage; MS creates things that give MS a strategic advantage. In this case, MS needed something of their own to tie into application development for their .NET "vision". .NET has the potential to make vast amounts of money for MS (that will make their current revenues look like small change) but they need as much lock-in as possible. If a whole generation of programmers learns C# instead of C++ and Java (just wait for the deals that MS makes with universities, it'll happen soon enough: "we'll donate hardware to your university if you teach C# in the courses") then those developers are primed for .NET development.
Sure, MS could use C++ or Java for .NET, but then developer skills are general enough for those developers to use anywhere.
Of course, it could be I don't know what I'm talking about, because it's difficult to really tell what C# will be about, what with all the hype surrounding it.
Re:Where do functions fit in? (Score:3)
> Now, Java uses mark-and-sweep in order to garbage collect.
No, it doesn't specify any such thing. You're perfectly welcome to use any collection system for Java objects you like, including high-performance generational/copying collectors.
As it happens, the bulk of Java implementations do use mark-sweep as part of a conservative collection approach, because of the need to interact with code that is not GC-aware. That's easiest to structure as a simple mark/sweep pre-pass to the real collection phase, which can do pretty much anything it wants that doesn't involve moving or freeing the conservatively blacklisted objects.
I have a GC library for C++ I've written which works exactly like this - mark/sweep conservative phase, generational copy collector phase - and works just fine.
As for the copy-by-value/copy-by-reference distinction, template library authors already make this distinction for performance (at least I do in mine) and provide simple ways to annotate classes so templates expand to by-reference versions. That said, the biggest problem with that is that even in MSVC++ 6.0, the template support is still so broken that you spend more time fighting internal compiler errors than coding :-(
Back to the article being replied to:
> Part of the uniqueness of C# is its conception of code reuse - for instance, instead of purchasing a commercial garbage-collector for your C++ code, you get one for free from C#.
Huh? It's "unique" to do something that most every programming language outside the Algol/Pascal/C family has done from day one?
> But where does this garbage collector reside?
It's in the language run-time, which can be wherever the implementation gives you the option of putting it. Y'know, like malloc ().
In case you've never seen one, a garbage collector is not a big piece of code - a simple but perfectly effective one is typically much smaller than the equivalent malloc () code. For high-performance allocator implementations (like the impressive Hoard from Paul Wilson's group at UTexas, where allocation performance of all kinds is studied), expect a GC and a manual allocator to be of roughly similar overall size and complexity.
Microsoft has made a C# compiler available (Score:2)
The SDK preview includes a copy of the C# compiler with Win2k Professional. (Note that the SDK does not include the Visual Studio 7 preview, but it does include "ASP+, the Common Language Runtime, documentation, samples, tools, and command line compilers.")
Microsoft also has some public newsgroups [microsoft.com] (hosted on "msnews.microsoft.com") for discussions about the .NET frameworks, C#, C++, VB, etc. And DevelopMentor is also hosting a
.NET mailing list [develop.com].
The August 2000 issue of MSDN Magazine [microsoft.com] is also featuring an article about C# [microsoft.com].
Re:A look at C# (Score:2)
Closures... (Score:2)
Events are one example of how C# takes a popular use of pre-existing object-oriented mechanisms and makes it explicit by giving it a name and logic of its own.
Except that it already existed decades ago in Lisp under the explicit name of "closures", exists also in Python under the name "bound methods", and exists in general in dynamically typed languages under the idea of "protocol". The difference is, in C#, you have type checking, so you have to declare the signature.
Delegate is rather an example of how "Those who don't use Lisp are doomed to reimplement it." :-)
Re:This level of language... (Score:2)
Decades? The first real language to fit this description is Java. Pascal came before C, if that's what you're thinking of. Modula-2 and Oberon were after C, but they descended from Pascal, not C. Ada? It wasn't nearly as low-level as C, and had more modern features (e.g. packages). Objective C? That was a merging of C and OOP that went down a different path than C++.
Not sure what you're talking about here.
Re:Go get C# at Microsoft! (Score:2)
Re:Geeks of all flavors (Score:2)
I don't know what your philosophy is, but I wouldn't want to tie my career to one company. Computer Science is not supposed to be learning the language of the day, but the fundamental paradigms and algorithms to allow you to pick up any language.
Though my personal preference is obviously Linux, in a pinch I could become a Solaris or Irix guy. With my UNIX background I can transition to NT easier than an NT guy could to UNIX. The tech world changes so quickly it is best to be flexible and to keep an open mind.
The way I feel right now though, I'd rather take a lesser paying job doing UNIX stuff and be happy than make more money doing MS and be miserable. Each to his own though. :)
Objective-C, NeXTStep, OpenStep, Mac OS X, and C# (Score:5)
Inheritance and Interfaces
Objective-C in the OpenStep/Mac OS X environment has single inheritance from a base class (NSObject), and protocols, which are precise counterparts to Java's interfaces. I have run into situations, however, where multiple inheritance is exactly what is required, and using interfaces meant that I had to re-write the exact same code more than once, as I was implementing a group of specialized collection classes in Java. There were two axes of differentiation: mutability (immutable, mutable) and ordering (partially ordered, ordered, strictly ordered). There was a lot of code that had to be duplicated that I should have been able to inherit from two abstract superclasses, one for mutability and one for ordering. (*grumble*)
Garbage Collection and Memory Management
Objective-C provides a semi-automatic reference-counted garbage collection mechanism that is amenable to programmer intervention to increase efficiency, through a construct called an Autorelease Pool. Every object has a retain count, which can be incremented or decremented. The object's retain count starts at one, and when an object's retain count goes down to zero it is garbage collected. Note that this happens the instant that the retain count drops to zero, not during a mark/sweep. However, you may need to pass an object on to another part of your app, but your code does not need/want to retain it. What you do instead is tell the object to auto-release. It is then put into the autorelease pool, and later on during the system's garbage collection each object in the autorelease pool is sent a release message. Some objects that are entered in the autorelease pool still have a retain count (as they are being retained by other objects) and are simply removed from the autorelease pool; others have their retain counts drop to zero and are garbage collected.
You can fine-tune this mechanism to a high degree by pushing your own autorelease pool onto the stack ahead of the system's primary autorelease pool. For instance, suppose you know that you will be allocating a whole bunch of objects for use in one part of your program, and that after you exit it you will never need them again. You can install your own autorelease pool in place of the system's at the start of that section of code, write normal code, then remove and release your private autorelease pool and restore the system pool, which releases all of the objects you created in your little section of code. Conversely, if you want an object to stick around, just don't ever release or autorelease it.
However, from a business standpoint, I find that the automated garbage collection and never having to worry about memory allocation issues is a strong point of Java. It allows me to code more complex applications and avoid the memory debugging issues that invariably bedevil complex Objective-C and C++ programs. I can get a WebObjects application to a customer much more quickly using Java than using Objective-C, with quicker turnaround and more feedback cycles.
Events, Notifications, and Delegation
The OpenStep and Mac OS X operating systems (viewed separately from the Objective-C language, as these features are available from Java as well) have long had notifications and delegates. There is a system-wide notification center, objects can define notifications that they will post in response to certain events, and objects can register to receive particular events or classes of events. This mechanism has been in place for a long time.
Delegation is a bit more tightly tied to Objective-C, as objects in Obj-C can pass messages (i.e. method calls) on to other objects, and objects can "pose as" other objects. An object can register to be the delegate of another object (in Java, the delegator object needs to make special provision for this), and there are "informal protocols" or "informal interfaces" defined that indicate the possible messages a delegate might receive from its delegator. Again, this is not new, and its assembly into a single OS is not new.
Primitive Types
This is one feature that I like very much, and wish that Java had. Objective-C, of course, will always have to support native types such as chars and ints, as it is defined as a superset of C. However, Java had the opportunity to remove this artificial distinction and didn't, which has caused lots of cursing from yours truly over the past couple of years.
Compiling to Native Code
I would point out here that compiling to native code may not result in the fastest execution. Review the HP Dynamo project [arstechnica.com], as written up on Ars Technica, for the reasons why JITC can actually exceed the speed of native code. The whole Transmeta Crusoe architecture is built around this theory of operation, and no one will claim that it's too slow.
Genericity
Amen to this. The fact that genericity is missing from Java is a serious gripe of mine, and the fact that it is missing from C# is a serious omission. This business of casting objects coming out of arrays is a pain in the neck, and it is often tough to find out where an object of the wrong type went into an array, although on the cast coming out you get a ClassCastException. Far better to catch the problem when the object goes in, which often gives you a better idea of where your design is broken. One of these days I am really going to have to start using the stuff coming out of the GJ project [bell-labs.com].
Conclusions
Overall, I find that the "new" stuff in C# is really old stuff. Furthermore, this is not the first time that all of this has been pulled together in one place. Almost all of this has been in the NeXTStep/OpenStep/Mac OS X family for a long time, and the implementations there are quite mature. I suspect that the implementations in C# will require several revisions before they reach the levels that programmers can really use.
Just so everyone knows, I am a Consulting Engineer working for Apple iServices, a part of Apple Computer, specializing in WebObjects development. These opinions are my own, however, and not those of Apple.
--Paul
Re:java's overhead (Score:4)
This was not sloppy code. I made extensive use of caching, pre-rendering and Java2D to make the most of the platform. Java simply has performance overhead -- the overhead of Swing (sloppy, sloppy -- it's not even well built; consider, for instance, that affine transforms were hacked on but don't apply hierarchically and thus can't zoom Swing UI elements) is huge, but there is also overhead in Java2D (JNI is slow, especially for copying chunks of data back and forth) and some additional overhead in the basic design.
In Java, it is almost impossible to write cache-friendly code. If you build things in an OO fashion, you cannot force locality, since object refs force you to essentially chase pointers for every object. If you write degenerate code that isn't OO (which sort of misses the point), then the array bounds checks hammer you anyway (and no, I have yet to see this eliminated by "smart compilers").
Java has some inherent problems with performance. These are real, they exist, and they are fundamental to the platform.
Consider that in Java, you have to have a thread per socket connection. Yes, I'm serious. There is no select, there is no poll. This means that a messaging server in Java can maybe serve 3000 clients before it starts to fall apart, but something in C++? Trivial to serve 20,000. You don't even need to optimize it to get a level of scalability that even optimized Java can't match.
Consider the weirdness that Java can spawn a child process but can't attach to a process that's already running (easy to do in C, C++, or C#). How do you write a watchdog process in Java when you can't kill a process that's hung?
Java is great. I've used it extensively. But it is seriously warped in some ways.
RSR
java (Score:2)
Clue time (Score:2)
Casting from Object is not comparable to C++ templates. Firstly, you lose any compile time type checking (which C++ templates give you). On the contrary, the Java approach is the hack. Don't take my word for it, read interviews with Gosling where he admits as much.
Where do functions fit in? (Score:2)
Instead of a Microscope... (Score:3)
From all the reviews I've read, "See Sharp" doesn't.
Re:Ack! Significant whitespace! (Score:3)
You may not always be safe in C++ [att.com]. (Acrobat format ;)
___
Re:For all the bashing C# gets here... (Score:3)
Other than that, you're just regurgitating the typical boring Slashdot opinion set in a highly overdramatic style. To call a language dangerous, a corporation evil and to literally identify a real manager with the PHB from Dilbert is at best inaccurate, and at worst displays a shocking separation between you and reality. The world is not as black and white as you would like it to be.
Re:OK, so how's performance? (Score:2)
Concerning the ASP+ stuff, though, there are a ton of things built in to increase performance. For example, current ASP pages are scripted: they are interpreted, and an execution path is stored in memory; when the cache maxes out, the plan for the least-used script is dumped, and if that page is visited again it needs to be re-interpreted. ASP+, however, creates a "compiled" version and caches it to disk. (The compiled version is the bytecode in the .NET CLR.) For a good article on ASP+ be sure to check out this piece I wrote:
ASP+: An Introduction and My Views on the Matter
http://www.4guysfromrolla.com/webtech/071500-1.shtml [4guysfromrolla.com]
Happy Programming!
Re:For all the bashing C# gets here... (Score:2)
Re: (Score:2)
Re:For all the bashing C# gets here... (Score:2)
jumping off point (Score:2)
Many people I know here at CMU found ML tough to wrap their heads around. I think it is a wonderful language, and I plan on using it for as many projects as I can in the future.
Ben
Not really a VM (Score:3)
What it seems like is a cross-platform distribution but with native compilation upon installation. Sorta a best of both worlds kinda thing...
It should have been called. (Score:3)
C~1
The Be Sharps (Score:2)
Apu: "How about "The Be Shaprs?"
Skinner: "Perfect."
Homer: "The Be Sharps."
Re:I don't understand this attitude at all. (Score:2)
Re:java's overhead (Score:2)
Marijuana explaining Windows Errors? (Score:3)
Microsoft says it's supposed to be pronounced C (sharp). But I've almost always heard "#" called the hash mark. Regardless of what the PR folks say I believe that M$'s developers really meant it to be pronounced see-hash. Could this indication of an obsession with pot among Microsoft's developers be an explanation of the buggy history of Windows? I don't know but it does explain things...
C# (read as...) (Score:2)
Re:Somewhat related to both points... (Score:2)
Thanks for the pointer to Cugar- I've not looked at it yet, but hopefully I can use it to escape the hell known as C++ this coming school year... Bloody professors won't let me use Objective-C or Smalltalk, but I'll show them!
Totally agree - when will OO die? (Score:3)
I've done extensive Java programming since it was v0.9, and C++ programming for about six years, and my opinion is that most of the OO stuff is complete mumbo-jumbo that only serves to confuse the core programmer and others who try using their code.
One rule that has served me well throughout the years is that one should never use a tool more complicated than the problem demands. Many OO programmers throw this rule out the window, and spend weeks playing with Factory patterns, polymorphism, huge inheritance hierarchies, and all sorts of other junk that creates bloated, useless code.
At the very least, C++ allows me to limit the amount of OO I introduce into programs. Java seems to be as retarded as Smalltalk when it comes to this.
Even for "internal" programs that don't require full-out performance, I can bang out a perl solution in half the code it takes a Java programmer do write. I have to wonder how Java programmers keep from going insane. The language and object hierarchy are so verbose that it takes at least twice as many lines of code to get anything done as any other language, and then the speed sucks. Rant off.
Re:copying, assignment, garbage collection (Score:2)
There is indeed overhead involved in a reference, but the hope is that you only have to handle "large" objects by reference. Things like numbers are indeed "atomic" meaning that one use of the number 5, for example, cannot be distinguished usefully from another instance of the number 5. It doesn't make sense to say "change the value of 5 to be 6." Instead, we have a place that can hold numbers, and we change the value in that place to be the number 6 instead of 5. In that sense, a numeric variable is a "reference" to a location in memory, which can contain "values" although we never use those terms in C, as there is no pointer indirection or overhead involved.
Now, the problem comes in when you talk about larger things, like, for instance, a 3-D rendered scene. Say I have a "scene" which contains a bat and a ball. If I "duplicate" the scene, so I have S1 and S2, and I move the ball in S1, does it move in S2? Hmmm. It depends on what you did/meant when you "duplicated" the scene. Is it the "same" scene, meaning you scratch one and the other bleeds? Or is it a new scene, which just happens to look the same? The same philosophical problem arises when you pass parameters to functions. Do you "duplicate" the argument, or not?
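In C# terms (to stay with the language under discussion), the two answers to "does the ball move in S2?" map onto structs, which are duplicated by value, and classes, which are duplicated only as references. A rough sketch with made-up names:

    struct BallValue { public int X; }
    class  BallRef   { public int X; }

    class SceneDemo
    {
        static void Main()
        {
            BallValue v1 = new BallValue();
            BallValue v2 = v1;    // a genuinely new ball
            v2.X = 10;            // v1.X is still 0

            BallRef r1 = new BallRef();
            BallRef r2 = r1;      // the "same" ball under two names
            r2.X = 10;            // r1.X is now 10 as well

            System.Console.WriteLine(v1.X + " " + r1.X);  // prints "0 10"
        }
    }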
The point is that a language implementation pushes bits and bytes around. However, a programmer is managing abstractions, and the handling of those abstractions CANNOT be specified as part of a language definition! It depends on the programmer's intent!
This is another mess that C++ got into when it had to manage assignment and initialization of classes: "copy" constructors for all your classes. But what does it mean to copy? It depends! Not only does it depend on your class, but it also depends on how you want to use it, which can change!
A side effect of this is that passing arguments around can involve a lot of excess "copying," if you insist on "copying" arguments before you pass them. C++ has to do this, because C did, except when you specifically ask for references. Now, if you have garbage collection, all these excess copies, most of which are soon discarded, need to be cleaned up.
I guess this is why the author here worries about garbage collection in his phone number instance. In principle, once you have everyone's phone number, you don't need to allocate any more of them, and unless people go away, or change phone numbers, you don't really need to throw them away. Unless you are in C++, so you have to keep "copying" them onto scraps of memory to send to subroutines, and then discarding the scraps as soon as it returns.
As a side note, mark-and-sweep is usually the worst possible garbage collection algorithm. You have to look at everything, even if most of it isn't garbage anymore. Much better is "generational" garbage collection, which mostly looks at recently allocated stuff. The idea is that if stuff has been held onto for a while already, it probably is still being held onto, while it is very common for things to be allocated, used only a little bit, then discarded. This can be very efficient GC.
The problem with people's perception of GC is that it happens behind the scenes. "malloc()" and "free()" are right there in your face. In fact, they have so few characters in them, "they must be fast." Of course, as soon as you fragment your arena, malloc can get slower, and slower, and slower... In Windows, where programs typically don't live long, you don't see this. But in the world of serious applications, if you want your program to run for weeks or years, this can be a real problem. Garbage collection on references can then be MORE EFFICIENT than manual collection, not only because it doesn't "forget to free()", it can also, when dealing with references, rearrange memory to be more compact, and therefore better localized for cache issues.
Of course, I don't think C++ or C# garbage collection can actually do this, because when you move objects, you have to go back and change all the pointers that pointed to it, which in Lisp is easy, because the machine can tell a pointer from an integer, but in C-based languages is hard, because pointers and integers are both just piles of bits. But hey, that's what you get for programming in object-oriented portable assembler.
Re:Ack! Significant whitespace! (Score:2)
Re-indent the whole file
M-<
C-space
M->
C-M-\
Emacs was designed for use with Lisp... now that's a language with crazy matching of brackets. Every time I close a brace, paren, etc., the cursor shows me where it matches, or outputs the line in the mini-buffer. If I felt like it I could also have it automatically highlight the whole region between brackets. Really, bracket matching is irrelevant.
Personally I think that people who code in C/C++/Java/etc. without using any braces for one-line blocks are bad programmers. They're just asking for a bug to be introduced at a later date. Hence I don't do it, and neither do a lot of the people I've worked with. It's just lazy.
I kind of like the braces (opening brace on a new line please) as it helps space the code vertically and provides an easy way to search for the end of a block (I'll leave coding for a 25 line screen to Linux kernel people and their crazy coding standards - bunch 'o masochists that they are!) I don't know how you would do that if the block was specified by indentation.
Re:Does C# exist yet?? (Score:2)
Go off and download the .NET SDK @ http://msdn.microsoft.com/net/ [microsoft.com]...
It's a mesh, says INTERCAL (Score:2)
Re:Sounds like Delphi (Score:2)
How is it better? (Score:2)
Re:Huh? (GC) (Score:2)
Mind you, this is one of Napster's programmers - the company famous for "taking existing filesharing software and making it work with MP3s" - about on a par with Baby's First JavaScript.
Re:IL is the key... (Score:3)
Of course, Microsoft isn't exactly the only group doing this. As much as I may like the looks of OS X, the development environment is, once again, highly dependent on a number of proprietary, platform-specific libraries and services. Linux and the rest of the UNIX-esque system benefit from the basic POSIX standard, but I think what we're seeing more and more lately is that that's not quite far enough these days. If the UNIXes of the world can't come up with a system that's as brainless to use as Visual Basic, Microsoft will continue to lure developers who can't, don't want to, or don't need to learn the intricacies of OO, and just want to quickly build applications with the benefits of pluggable components.
Re:New languages have potential, but C# doesn't (Score:2)
Let's deal with your points 1 by 1.
1: It's been shown a few times that incremental languages are successful and revolutionary ones aren't. C++ was incremental on C. Eiffel was revolutionary. Which is in greater use? Leave the revolutionary stuff to experimental languages, but be clear that ideas extracted from those languages and implemented in an incremental way are the ideas that succeed.
2: Schizo Gerbil. Jeeze - get a grip. Does C++ keep all semantics the same as C? Nope. Does Java?
3. Non-open runtime. Really, you don't know, because the language is not ready for use yet. However, Anders has said that the language and all the other MS languages will compile down to run on top of the
4. No real standard. They have said they will submit to ECMA.
Get some balance dude.
Why program in flat text files?? (Score:2)
Re:Totally agree - when will OO die? (Score:2)
Well, I admit C++ is less of an offender than Java, as it isn't prompting you to use OO whether you want to or not.
As for a "lack of OOA and OOD" in my office, I'm sorry, but I've heard this one a thousand times. Typically, I find the worst offenders are the ones with the most books and training under their belt - they are even more likely to employ specious OO methods where they aren't needed. The problem is, most of the OO training and literature still isn't frank enough about the success rate of these methods, and what domains they handle adequately. The party line still seems to treat OO as a silver bullet that we could all use to save ourselves if we weren't so stupid.
Obviously this is anecdotal evidence, but I still haven't heard of one shop who has gone totally OO and is better off for it.
Generic Java (Score:3)
I've used GJ quite a bit, and I'm quite happy with it. Furthermore, there's reason to hope that code written in GJ (the syntax of which is similar to C++ templates) will be compatible with future versions of Java, since Sun is looking into adding genericity to Java, and looking at GJ in particular.
Re:I don't understand this attitude at all. (Score:2)
(Right now I am teaching myself Visual C++, and the hardest thing about it for me is getting used to MS's editor, not the syntax!)
LL
Re:I don't understand this attitude at all. (Score:2)
The parent post was making lots of sense until I read this bit:
Since it does things like treat "=" as comparison in conditionals and assignment in statements, as well as the whole whitespace formatting thing, it totally spoils you for writing in things like C and Perl.

I'm sorry, but that is just not true. In Python, '=' is assignment and '==' is comparison, just like in C, C++, etc. What you cannot do is perform an assignment and simultaneously treat the rvalue as a boolean in a conditional. In other words, if you mean to do this:
if a == 6:
    print 'a was 6'
but you had accidentally typed this:
if a = 6:
    print 'a was 6'
Python stops with a 'Syntax Error' exception. If you make the same mistake in C, it would happily overwrite the previous value of a and print "a was 6". Lots of fun to debug... not!
Re:A look at C# (Score:3)
Uhuh... right...
The runtime for
Simon
Re:Ack! Significant whitespace! (Score:2)
That's OK, he's confused, too... (Score:2)
The JVM is not required to do mark-sweep GC. The JVM spec [sun.com] specifically leaves the implementation of storage management unspecified.
This is good because it means that Java can use modern, higher-performance GC strategies like stop-and-copy or generational GC, both of which have been in use in Lisp and Scheme systems since the 1980's. I strongly suspect that C# will have to use mark-sweep or some other non-relocating GC, since you're allowed to go down below it to assembler, exposing pointers that might need relocation.
Do most JVM implementations really still use mark-sweep GC? Despite James Gosling and Guy Steele both being ur-Lisp hackers?
Borland already did C# in C++Builder (Score:3)
Personally, I found C# support of events to be a very exciting new feature
C++Builder has been doing this since day one, with what Borland calls a "closure". You use a new keyword, __closure, to declare a pointer which points to a member function of a specific object instance. Not surprisingly, Borland uses this to drive the entire event system in their GUI framework. It rocks.
Properties are another example, even though they're not as much of a labor-saver as events are.
Again, Borland has been doing this since day one. The keyword __property can be used to declare object members which appear to be simple variables to "outsiders", but do magic when read or set.
Once again, Microsoft fails to innovate, but instead steals from elsewhere.
Java vs. C++ speed (Score:2)
HotSpot is *extremely* good.. probably a fair distance along the way to optimal, given its ability to do things like deep inlining across several layers when the runtime history indicates that path is very hot. The reason Java code still runs slower than C++ is partially due to the HotSpot overhead and partially due to the fact that the way you write code in Java is often much less CPU efficient than if you were to write code with similar intent in C++.
In C/C++, you would parse a file line by line by constantly re-using a single memory buffer for each line of the file that you read (sizing it up if it overflows, of course). In Java, since Strings are immutable (for thread safety), you wind up creating a new String for each line, plus a new String for just about every subelement that you pull out for further processing. This sort of style is mandated by the fact that so much of the Java class library API's demand real live immutable Strings, and you don't have a choice but to create a bazillion of them in many cases.
With HotSpot, creating new objects off of the heap can be very nearly as fast as stack-based allocation of auto variables in C/C++, but it's just not going to ever be as fast as intelligently re-using a memory buffer. There are many things in Java programming that force that kind of trade-off. The benefit is that you get a lot of aid and encouragement to make your code thread-safe and that it is actually possible to guarantee that an object's private state can't be trashed by anything external to it, never no way no how. A completely reasonable trade, in my opinion, especially in light of the massive portability support provided by Java, but it's not The Answer To Everything, of course.
But HotSpot itself is some impressive shit.
Events used to drive Garbage Collection? (Score:3)
Re:Objective-C, NeXTStep, OpenStep, Mac OS X, and (Score:2)
As for JITC, prospective apostles of this technology should try it out with real programs and do extensive benchmarking - indications are that HotSpot is still two to four times slower than C++. Most claims I have seen for JIT code is based on in-memory operations with very little IO and/or user-interaction.
Re:java (Score:2)
Okay, obligatory plug over... =)
Re:A look at C# (Score:2)
Which of the two of us worked on the
Simon
Re:Copy By Value vs Copy By Reference (Score:2)
the output would be "21" and "10" if it is passed by reference, not "10.5" (integer division)
C and C++ pass by value by default, and Java passes primitive types (int, double, char) by value, but Objects by reference
Back to C... (Score:4)
And it's a shame to not see good template (genericity?) support in C#. Or any language, for that matter.
I think choosing a good type system is where a lot of languages fall flat, and I'm not a big fan of the huge C++/Java Object/Type/Library approach, although I haven't seen a truly good solution to this problem yet. C, Pascal, Java, Perl, Scheme... They all have different ideas and solutions, and I haven't seen a "Right Way" yet. Although I think Scheme has the right idea with its first class data types, it still all needs some work.
---
pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
Is it just me, or does this like VB++ (Score:2)
The similarities to Visual Basic are eerie. It sounds a lot like they looked at the way people were using VB and incorporated those ideas plus improvements people have been asking for into a package that is more 'programmerly'. Anyone want to place any bets that this is the heir apparent to VB?
Re:power languages (Score:2)
I would argue that efficiency and performance are more likely arguments for C.
(Eiffel, Haskell, ML come to mind)
Haskell in particular is cool, but I think you do these languages a disservice by mentioning them in the same post as Java - while they all require a new way of thinking, none of them has nearly as much mumbo-jumbo associated with it as Java. In fact, I see the simplicity of Haskell as its key advantage.
Generic Programming (Score:2)
CORBA, not JNI (Score:2)
I've always been partial to CORBA as a solution to using native components. It doesn't matter where and how your components are implemented. We were doing stuff using the TAO ORB for our C++ servers (and Visibroker in Java). For what we were doing, CORBA is so fast that it wasn't really noticeable if my CORBA servers were at the other end of a dial up connection!
I don't understand this attitude at all. (Score:2)
I think this is a common poor analysis that reads the situation backwards. In reality, code is almost always indented "correctly" according to what the programmer intended; errors arising from incorrect indentation are generally due to the programmer failing to insert braces in the correct positions, and thus don't exist in a language like Python. So generating incorrect code due to a formatting error is simply an impossibility, unlike in C.
Whitespace formatting is instantly visible, that's why people indent their code even when it's insignificant. Braces, OTOH, are very hard to keep track of. When the whitespace isn't used by the compiler, that means you're using one technique to give this information to a human reader (including yourself), and another to give the information to the compiler: a sure recipe for errors.
How often have you seen C bugs due to missed semicolons and braces? Part of the problem looks like this:
if(x==y)
    doxeqy();
Then you realize you had to do more for that condition:
if(x==y)
    doxeqy();
    doyeqx();
Whoops! It looks okay, but of course it's not.
Or how about this classic mistake?
if(x==y)
    if(t==u){
        doxyandtu();
        doxyandtu2();
        doxyandtu3();
    }
else{
    donotxy();
}
Hmm... looks okay, compiles fine...
Sure, they're goofy mistakes, people make them all the time! Human minds are terrible at diligence tasks (when they have to remember to do something and nobody is reminding them to do it). Of course you're going to forget to put in braces sometimes! Why not design the language so your first impression is always right?
In practice nobody ever fobs up a Python script with something like:
if a=b:
    if c=d: do_aeqb_and_ceqd1()
    do_aeqb_and_ceqd2()
It's immediately apparent from the indentation that the second function call is in the "if a=b" block, not the "if c=d" block.
However, I do think that pushing Python as a teaching language is a terrible mistake. Since it does things like treat "=" as comparison in conditionals and assignment in statements, as well as the whole whitespace formatting thing, it totally spoils you for writing in things like C and Perl. Even experienced C programmers often forget things like semicolons and mix up comparison and assignment, people moving from Python to C just have a terrible time.
---
Despite rumors to the contrary, I am not a turnip.
This level of language... (Score:5)
If I want to use a medium-level language because I want absolute control and optimized speed, I'll use C. I don't want an "almost-medium-level-but-a-little-higher-than-that" language.
Granted, there's a need for these "weird-level" languages, and some people love them - but I think that C++ and Java nicely fill the niche. So, my first thought, which is even more valid, I think, in the face of this review, is "Why does Microsoft feel almost obligated to make an M$ version of *everything*??"
For GUIs and money managers and anything else aimed at "my mom", Microsoft is guaranteed to reign supreme, because "my mom" doesn't really care about performance issues or security or any of that. But my hunch is that, in light of some of the bugs and general ickiness covered in this review, few people are going to want to switch over to C#. I mean, what would be the advantage?? If you already write C++ and/or Java, why would you want to start writing stuff in C#? I just don't understand.
What's the point? (Score:3)
C#: Answer to the DOJ? (Score:5)
It seems to me that creating a new 'standard' language, which nevertheless relies heavily on COM and
Let's say that C# is simply a better language to program for Windows than C++ is. Let's also suppose the hypothetical case where new Windows functionality comes along in future Win versions, and that this functionality is more easily taken advantage of using this new C# language. This gives developers the incentive to code new Windows products in C#. Note that C#'s structures are different enough that porting from C# to C++ would not be trivial.
Now suppose that Linux (or another OS) starts gaining prominence in the next 2-8 years. As with any new OS, its main barrier to entry is lack of software. (The only reason Linux is viable is because of all the UNIX software it inherits.) In this time, Microsoft's pushing of C# has created a new software base for Windows that is relatively locked into place, unable to be ported to other platforms without significant effort.
Now I'm not saying this is evil. I'm not saying it's a conspiracy. Often languages built for specific environments are superior tools in those environments specifically because they're specialized.
It's just something to be aware of.
Kevin Fox
Re:java (Score:2)
there have been a few instances where I was faced with a programming problem where the solution depended on objects *not* being automatically destroyed, as they were supposed to exist separate from the main object hierarchy and would take care of their own destruction when the time was right.
WTF? Objects will only be collected if they're unreachable, and if they're unreachable, what do you want them hanging around for?
Also, the whole thing about C#'s "structs" seems a bit dubious to me:
One new feature that I mentioned already was that of copy-by-value objects. This seemingly small improvement is a potentially huge performance saver
It would only be a performance saver if these objects are really small, because you'll be copying them around all over the place. I also have to wonder how C# deals with the object slicing issue. Object slicing is when you pass a subclass instance to something that takes the base class by value, and implicitly "slice" the object down to the base class. It doesn't happen in Java (or Modula-3, or Objective-C, or any other language that always passes objects by reference). It does happen in C++ if you're not careful, though, and it's really hideous.
Re:New languages have potential, but C# doesn't (Score:2)
Right now there are two things that I would like to see in a new OO language:
- templates (not the crappy C++ version)
- aspects (as in AspectJ)
Both make it significantly easier to model certain problems. Aspects in particular are really cool. Unfortunately C# provides neither, which dooms it to obsolescence even before it is finished.
I don't think C# is a bad thing, I just think it is not a very big step forward (too small to be interesting).
Re:For all the bashing C# gets here... (Score:2)
JIT != compiler (Score:2)
Providing a self-contained executable, or a set of compiled files (à la DLL and EXE), is much easier than making sure that a customer has a serviceable JRE on his/her box. As it stands, dealing with distribution of Java programs is just as bad, if not worse, than handling VB programs. With VB all I had to worry about was providing the correct VBRUNxxx.dll, but with the recent (relatively, you must admit) switch to Java2 and the more recent addition of HotSpot in JDK1.3, things got complex.
Distributing the complete JRE each time, Just In Case, isn't going to cut it. Yes, the support stuff is in JARs, and these can all be conditionally installed - but that's a bit more to worry about than with an old fashioned EXE. Also, the fact that invoking a Java program involves not only starting the interpreter/JIT, but also setting a CLASSPATH, makes things icky - at least until Java makes adequate inroads that a CLASSPATH can be presumed. Sure, setting up a batch to do this is fine, but we're just compounding assumptions at that point. A binary executable is a lot more workable when doling out software to non-programmers.
Re:java (Score:2)
Not all Java objects have to have vtables (Score:2)
After all, Java does support the final attribute on both methods and classes, and JVM's like HotSpot are perfectly capable of doing extremely aggressive inlining as necessary.
Since Java is a late-binding language, you do have to have some intelligence in the JVM to optimize method resolution when new classes are loaded and compiled, so you still have to pay the link penalty for both virtual and non-virtual Java methods at the time a class is loaded. But Sun's JVM on Solaris, Windows, and Linux will rewrite code during execution to translate final method calls into direct calls without any vtable. I believe it may even be able to optimize virtual methods to direct calls in circumstances where the execution history forces the object pointer to be of a specific type, as in the case where one line of code creates an object of a given type and the next line calls a method on it. Since HotSpot could reasonably tell that at that point in the code the object reference will always be of the specific subtype that was just created, it would be able to just do a direct jump to the method.
I'm speculating here somewhat.. HotSpot may not be quite that smart, but Sun's comments strongly suggest that it does do that sort of path analysis for heavily used code segments.
Re:Totally agree - when will OO die? (Score:2)
Well, I work in one of those shops that has gone totally OO and is better for it. We are a financial institution and our code involves rigorous manipulation of financial data. This system was originally implemented in C and C++. We hired a real OO architect, converted to Java, completed team designs, and the result is a much better system. Since then, I have worked on other projects that utilize OOA/OOD and implement in Java. The result is faster implementation, fewer bugs, and a better translation of the business logic into code.
I was hoping you would provide some evidence for your bias against OO. What were the problems (not anecdotes) that led to your belief?
As far as your comment on OO training and relevance to the real world, I agree. Much of our success is derived from the combination of well-educated and experienced engineers (I am one of the most junior, with 5+ years as a software engineer), an experienced chief architect (who has implemented dozens of OO systems), and old-fashioned elbow grease! Let me recommend one OOA/OOD book that actually has real-world problems and life cycles: Applying UML and Patterns by Larman. Check it out; you may find it more useful than much of the academic OO drivel on the market.
Finally, I am not one of those folks who thinks OO is a silver bullet. I model in the relational database arena often enough to know that OO and RDBMS are not a clean mix. I was as skeptical as you - until we designed and implemented a system with so few problems that it was shocking. These experiences taught me that OO has merit and too many positive ideas to throw out with the bath water.
Later.
--
IL is the key... (Score:3)
Why is IL the key? Consider:
They are going to submit C# the language as a standard - but I don't think that includes IL. That means that even if you make a C# compiler based on the standard, they could change how IL is structured to shut you down.
They have stated IL will be compiled to native code in one pass. That can happen before it's deployed, or on the target platform. But by doing that, they lose the possibility of dynamic optimization (one of the things that makes HotSpot fast, and better than just a straight JIT). By allowing the compilation to happen before deployment, you also risk a bad choice for target platforms and possibly reduced performance of distributed components.
It affects all other languages. Using Visual Studio, pretty much all languages will compile into IL. That means the workings of IL affect your code to some degree, regardless of language.
C# is an interesting language, and I like some of the features - but for all that, would it be impossible to compile C# to Java bytecode? I don't know the answer to that myself for certain, but the development and capabilities of IL as a platform are really more interesting to watch than whatever language sits on top.
Another interesting question to consider - C# allows you to have native (unprotected) code blocks. How does that work in relation to IL? Does the code get bundled with the IL, to be compiled when the IL is compiled? Or are the native parts compiled to native code when the other code is compiled to IL, and transported as a mix of IL and native code? The answer has some implications for optimization of native code blocks.
COM and C# (Score:3)
Which is probably how most of this functionality (encapsulation, events, and call-backs) will be implemented. I'm getting the sense this is going to turn out to be something of a quasi-language, which is what Microsoft's Java implementation became in the end (just try to do something meaningful in it without invoking a COM object).
In the end, C# really does not seem to offer anything meaningful that VB does (or will) not, and for the same reasons will not be any less portable.
Re:Back to C... (Score:3)
Re:C#: Answer to the DOJ? (Score:3)
Java may not be the ultra-portable platform it originally claimed to be, but at least companies who develop with it are not signing their eternal soul (and support contracts) away to a single vendor. If you start down the road of .NET, you are now committing not just your desktop applications and documents, but all your business logic and data, to the benevolent guidance of Microsoft.
The silver lining to all this is that some of those business-analyst types will realize the same thing, and say so.
What _I_ Like about C#.. (Score:4)
C# is strongly typed, so you don't spend hours looking through code trying to find a type mismatch.
It is early binding instead of late binding, meaning it is quicker! With Java (late binding), a file search and enumeration of 8000 files on our servers here at work took an hour and a half, while 50000 files with a C (early binding) app took 4 minutes, so C# takes the best of both. Also, because it is early binding, you don't have to worry about references to non-existent objects when you are using DLLs, for instance. C# automatically loads and reviews the routines contained in a DLL before compiling, so a reference to myDLL. will bring up a popup list of the routines available in that DLL.
Very cool stuff! It will be interesting to see if it takes over as the new, trendy programming language of 2000/2001, as Java has been for a few years.
Re:Totally agree - when will OO die? (Score:3)
now, where it got interesting was when we actually examined the software engineered by novices. the o.o. paradigm forced more thought to be placed into the structure of the application's design, thus typically resulting in easier-to-maintain software.
the only software written in c, cobol, basic, pascal, you name it, that was easy to maintain was that written by the really experienced. the novices in the crowd made our lives very painful. my experience seems to show me that o.o. languages are less lenient toward rush-jobs at design-time.
just my 0.02
Peter
Re:C#: Answer to the DOJ? (Score:4)
Which gets to a more theoretical problem. The purpose of COM, and models like it, is fairly specific. It is interoperability between separate running programs, either locally or across a network. But who says I want to share every single object in my program with the outside world? What's the point in having a string class that could potentially be shared between programs if I've got no need to share it between programs?
It seems to me that people are going to find that to get C# programs to perform acceptably, they are going to have to design with big, heavy kitchen-sink classes. And that worries me, because that sort of design is, in my opinion, one of the biggest downfalls of most Windows software. (Especially Microsoft APIs.) I'm sick of having to instantiate five classes and code a hundred lines just to find out if the damn CD player is in the "playing" state.
It seems to me that this is a case of having a hammer (COM) and seeing every problem as a nail.
If I were designing C#, I wouldn't make every object a COM object. Instead, I'd have some kind of "COMmable" attribute that could make some objects COM objects with little fuss. Put the control in the hands of the programmer.
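Something along these lines, say (the attribute name is made up purely to sketch the idea):

using System;

// Hypothetical opt-in marker: only classes tagged with this get the COM plumbing.
[AttributeUsage(AttributeTargets.Class)]
public class ComExposedAttribute : Attribute { }

[ComExposed]
public class OrderService { /* shared with the outside world */ }

public class ScratchString { /* plain object, never leaves the program */ }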
For good "template" support: try ML (Score:5)
fun length nil = 0
| length (h::t) = 1 + (length t)
which has type: 'a list -> int
(meaning the function takes a list of anything, and returns an integer).
Through the mechanism called "functors", you can specialize a generic structure (say "sets", or "mappings", or "arrays") with some types and operations to create a new type. Signatures let you make these types truly abstract (paired with type safety, a very powerful notion).
All of this is type safe (with proofs). Most of it is accomplished statically too, so there's little run-time overhead. It is indeed Scheme with "some work".
Attributes (Score:5)
Why should you care?
Well, attributes are really useful in cases where you want to pass some information about the class somewhere else but you don't want to make it part of the code.
With attributes, for example, you can specify how a class should be persisted to XML.
using System;
using System.Xml.Serialization;

// Address and Item are assumed to be defined elsewhere.
[XmlRoot("Order", Namespace="urn:acme.b2b-schema.v1")]
public class PurchaseOrder
{
    [XmlElement("shipTo")]   public Address ShipTo;
    [XmlElement("billTo")]   public Address BillTo;
    [XmlElement("comment")]  public string Comment;
    [XmlElement("items")]    public Item[] Items;
    [XmlAttribute("date")]   public DateTime OrderDate;
}
At runtime, the XML serializer looks for those attributes on the object it's trying to serialize, and uses them to control how it works.
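For example, serializing an order could look roughly like this (a sketch; it assumes the PurchaseOrder, Address, and Item classes are defined as above, and the order-construction details are invented):

using System;
using System.Xml.Serialization;

public class SerializeDemo
{
    public static void Main()
    {
        PurchaseOrder order = new PurchaseOrder();
        order.Comment = "Rush delivery";
        order.OrderDate = DateTime.Now;

        // The serializer reads the [XmlRoot]/[XmlElement]/[XmlAttribute] metadata
        // at runtime and shapes the XML output accordingly.
        XmlSerializer serializer = new XmlSerializer(typeof(PurchaseOrder));
        serializer.Serialize(Console.Out, order);
    }
}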
You can also use attributes to communicate marshalling information, security information, etc.
The nice thing about attributes is that it's a common mechanism, and it's extensible, so you don't have to invent some new mechanism to do something similar.
Or, to look at it another way, attributes are just a general mechanism for getting information into the metadata.
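And any tool, not just the XML serializer, can pull that information back out of the metadata via reflection; a minimal sketch, again assuming the PurchaseOrder class above:

using System;
using System.Xml.Serialization;

public class ReadMetadata
{
    public static void Main()
    {
        // Ask the type system for the [XmlRoot] attribute placed on PurchaseOrder.
        object[] attrs = typeof(PurchaseOrder).GetCustomAttributes(typeof(XmlRootAttribute), false);
        XmlRootAttribute root = (XmlRootAttribute)attrs[0];
        Console.WriteLine(root.ElementName);   // "Order"
        Console.WriteLine(root.Namespace);     // "urn:acme.b2b-schema.v1"
    }
}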
Re:What's the point? (Score:4)
It provides a Java-proof firewall around Microsoft's market share.
--
Re:java (Score:4)
So, in this case, the analysis of time-to-collect is misplaced unless there is reference to a specific VM.
I recommend reading the VM spec. I've read it 5 or 6 times myself.
Re:Totally agree - when will OO die? (Score:3)
- projects which are one-shot, or in which there is no interest in maintaining the code. For these, why do design at all? If it works well enough for you to use, and you don't have to worry about adding features or fixing bugs other people run into, just finish the damn thing and go home. Perl is great for this.
- projects in which the customer has no clue what they want. I've been using design principles for a while; my current customer has no earthly clue what they want, but they definitely hope I can completely change everything around at the last minute when I show them a working program based on my own ideas of how it should be designed. They literally won't pay me to do design specs. The code is rapidly approaching 100k lines, and I have no idea how the massive one-ton block of spaghetti will be maintained in the future. There are parts of an old program with 25-page-long routines pasted in, and I have been told that if I edit those routines I will be fired on the spot.
- single-developer projects (or even two/three-developer projects) don't really need OOD and documentation as much as team or multiple-team projects. It can still hit you hard right around 40-60k lines (depending on just how good you are), when you realize that you don't remember how the whole thing works anymore. After about 80k lines, you will spend more time communicating how things work and bringing new developers up to speed than you will coding - basically you aren't done with the program and already you are maintaining it.
If you are writing small, simple pieces of software, OOD is a joke. The reason isn't that OOD doesn't scale down; it is that the problem is so simple you designed it in your head. The end product is so simple that you can look at it for five minutes and understand what is going on. But as soon as you can't think through all the issues someone will face in the project just by looking at the problem, as soon as you can't be sure what steps to take to get from A to B, you need OOD.
Re: Pronunciation: whatever... (Score:4)
Anytime you come up with a name where you have to explain the pronunciation every time you use it, you know that it's a real lamer. It's a dead giveaway that you worked too hard, stayed up too late, and got too cute.
Of course, there _are_ lots of ways to say C#. I always think 'hash' every time I see '#', and if you use the hard sound for the letter C, you get K-hash, or simply Cash. Which I think fits, considering the orifice from which it issued.
Cheers!
Re:What _I_ Like about C#.. (Score:4)
> looking through code trying to find a type
> mismatch.
Client.java:43: Incompatible type for declaration. Can't convert Context to Community.
Community context = (Context)contextE.nextElement();
yeah, that took a couple hours to track down. phew!
power languages (Score:3)
I think it's an unfortunate misconception that C is the ultimate "power language" because it gives you so much "control". There are lots of advanced languages around (Eiffel, Haskell, ML come to mind) which are more powerful than C precisely because of their restrictions. (You can reason about your programs more precisely since you know certain behavior is impossible). Java has some of this (safety, at least) and I think that makes it better for most applications than C++.
I'll agree with you here, though: aside from a few cosmetic improvements, C# is not really any better than Java.
don't get your hopes up on performance (Score:4)
Java has an enormous number of man-years poured into its design, library, bug fixes, and various implementations. Issues like safety, sandboxing, security, and reflection aren't even addressed by C#. A complete set of cross-platform libraries is also not a goal of C#, while Java actually delivers, and delivers pretty well. And people are working hard at adding genericity and features for high performance computing (JavaGrande) to Java.
If Windows programmers switch in droves from C++ to C#, that would be great: as far as it's defined, C# is almost indistinguishable from Java, and it would elevate the quality of software on Windows platforms. But at this point C# looks like a dud to me: it's late, it's non-standard, it has no user community, and it doesn't even promise to offer any compelling advantages over Java.
Early/Late binding (Score:3)
There are optimizations that enable late-binding to be more efficient (such as a vtable). Java implementations should be able to determine (via the final attribute) that a function can be early-bound instead of late-bound. There are also C++ compilers that can eliminate virtual function calls by doing link-time program analysis.
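For what it's worth, C# exposes the same trade-off directly in the source: methods are non-virtual unless marked virtual, and sealed closes a class off much the way Java's final does. A rough sketch (the class names are made up):

public class Shape
{
    public virtual double Area() { return 0.0; }   // late-bound: dispatched through a vtable
    public double Perimeter() { return 0.0; }      // non-virtual: the call site is bound early
}

public sealed class Circle : Shape
{
    private readonly double r;
    public Circle(double radius) { r = radius; }

    // 'sealed' plays much the same role as Java's 'final': calls made through a
    // Circle-typed reference can be turned into direct calls, no vtable needed.
    public override double Area() { return System.Math.PI * r * r; }
}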
Re:What _I_ Like about C#.. (Score:3)
Then there's the question of Java's HotSpot, which can remove the late-binding overhead in Java at runtime once it has optimised a piece of code. It can even de-optimise, switch back to late binding, and then re-optimise if you start doing dynamic class loading. This is a big and complex subject, and I'm not sure I could explain it here even if I had the time...
I submit the most likely reason for the performance of your Java program is that it wasn't as well written as the C++ one. Binding is a tiny, tiny part of the difference between the two languages. There are many other factors which are more likely to account for the difference.
As for the DLL thing - that sounds great - except when you come to deploy your application, the user doesn't have the same version of myDLL that you did on your super dev box. Hence the user gets a reference to a non-existent object. Early binding really has nothing to do with this. It sounds like it's being used to implement a nice feature in the IDE, not a solution to the age-old library versioning problem that people are discussing in the "Let's Make Unix Not Suck" thread.
Lord Pixel - The cat who walks through walls
Re:This level of language... (Score:3)
(1) The ability to do source-level debugging is almost completely lost in translation from C++ to C.
(2) It is much easier to optimize code when you are going from a symbolic representation to RTL or some other near-machine-level representation than it is when going from symbolic code to some other symbolic code with uncertain semantics. While the translation is *possible*, the results are not elegant, efficient, or readable.
A look at C# (Score:5)
I recently went to a brief presentation on C#, done by some Comp Sci folks just back from the MS developer conference.
A few points I recall:
Someone asked why we need another language, especially one so close to Java. The presenter(s) explained that MS basically wanted to offer a VM-based, Java-like language, but was unable to add their own extensions to Java to fit it in with their new strategy (remember the lawsuit from Sun?). They remarked that perhaps Sun made a mistake in their desire to keep MS from making non-standard alterations to the MS implementation of the Java VM. MS, as usual, just went ahead and created their own new standard. Now we have another language to pull developers away from Java.
Gripes and rebuttals (Score:3)
// Whether my_int is defined as a struct (copied by value) or a class (passed by
// reference) silently changes what foo() returns. Assume my_int declares a
// public int field named value.
my_int bar(my_int x) {
    x.value += 1;     // bumps the copy if my_int is a struct, the shared object if it is a class
    return x;
}

my_int foo() {
    my_int y = new my_int();
    y.value = 2;
    bar(y);           // no lasting effect for a struct; leaves y at 3 for a class
    return bar(y);    // yields 3 for a struct, 4 for a class
}
foo returns different results depending on whether my_int is a struct or a class. You cannot tell whether modifying the object will modify the original without looking up the original type definition, which can be buried pretty much anywhere in C#, because it has abandoned Java's rigid formalism of one public class per file with the file name matching the class.
Mind you, Microsoft will probably sell a nice Visual Studio plugin which looks up the source definition, shows you whether it's a class or a struct, and lets you hyperlink to the definition (witness MSVC and VB), but I'm examining the language, not Microsoft's tools.
I disagree with the author of this article about the presence of Attributes. Attributes (as ugly as they are) create an avenue to extend the metainformation provided by the language. Since the attributes reside at the class rather than the instance level, the glut of their presence is not intolerable. Their presence is necessary to fully specify COM parameters, and they can act as a reflection tie to documentation, provide editor bindings to source code, etc., in one place, rather than Java's comparatively hackish javadoc approach of differentiating /* and /**. It's new, it's different, but I can accept it and already see uses for it.
However, I do not see the need for Events to be a language-level construct. With the introduction of generics/templates they could be implemented as a generator/listener template with a common superclass and subscribers tracked in a static list per event template used. No extra nomenclature need be added to the language to describe what is just another pattern, after all. In the absence of generics I suppose they did the best they could, but it's another case of pandering to their present programming paradigm (pardon the alliteration).
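A minimal sketch of that idea, assuming some form of generics (the syntax is illustrative only, since today's C# has no generics; the names are made up, and I've simplified it to a per-instance subscriber list):

using System;
using System.Collections.Generic;

// Events as an ordinary library type rather than a language-level construct.
public class Event<TArgs>
{
    private readonly List<Action<TArgs>> listeners = new List<Action<TArgs>>();

    public void Subscribe(Action<TArgs> listener) { listeners.Add(listener); }

    public void Raise(TArgs args)
    {
        foreach (Action<TArgs> listener in listeners)
            listener(args);
    }
}

// A publisher just exposes an Event<T> field; no 'event' keyword required.
public class Button
{
    public readonly Event<string> Clicked = new Event<string>();
    public void Click() { Clicked.Raise("clicked"); }
}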
I do miss the presence of Java's inner classes and either C++'s templates or some other form of constrained generics (can anyone say Eiffel?).
In fact, the lack of any form of generics will probably keep me from using the language, as I am a pattern junkie, and templates/generics are key to avoiding a lot of cut-and-paste code when using similar patterns heavily.
Apple releases a new language called C#++ (Score:4)
C#++ is designed to fill that middle ground.
CaptChalupa, a MicroSlash programmer, said, "I love C++, and have always programmed in C++. But the lure of C#, which my colleagues have been raving about, is tugging at me. The reason I don't want to abandon C++ is because I still like to keep the flexibility of collecting my own garbage. C#++ is the answer that may just finally pull me away from C++."
Meanwhile, in another development, Microsoft Applications Inc. has announced the development of a new language called "C##".....
I'm so excited, I can hardly contain myself (Score:4)
As a Microsoft-whore, I've been taken over by the prospect of developing with the new tools of VS7 (which, BTW, features C#, VC++, Managed VC++, and VB all running in the same IDE with concurrent, multi-language debugging... baby!). I am working on a project that requires two WinCE portions and a data management system. The things I've read have made me consciously decide to do no code development on the data management portion until VS7 is in my hands and on my machine.
This month's Visual C++ Developers Journal has a cover to cover exposé on VS7. This is a link to the online article.
http://www.devx.com/upload/free/features/vcdj/200
"Blue Elf has destroyed the food!"
Re:Not see sharp (Score:3)
Re:C# (Score:3)
I wonder who would win that court battle...
Ack! Significant whitespace! (Score:3)
I shudder when I think of programming with significant whitespace. It's what has kept me from picking up Python in earnest--at least until I write a conversion program that turns braces into the whitespace that Python likes (in Perl :-).
To think that C would be made *better* by the addition of significant whitespace gives me the chills.
Steve
Re:Ack! Significant whitespace! (Score:4)
> It would be okay for the compiler to generate a warning for incorrectly-indented code, but to generate incorrect code instead is simply inexcusable.
Depends on your definition of correctness. ;) If you wrote:
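if (condition) {
    doThingOne();
doThingTwo();    // reconstructed example: the braces keep this inside the if, despite the indentation
}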
On a quick visual inspection of the code, I'd assume that doThingTwo() was outside of the if clause, and if you wrote:
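if (condition)
    doThingOne();
    doThingTwo();    // reconstructed example: indented as if inside the if, but with no braces it is not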
I'd definitely assume doThingTwo() was inside the if until I looked closer and noticed the braces were missing. In this case, the "correct" code would be very surprising.
In Python, you don't have these sorts of surprises; the block structure is immediately obvious from the indentation, and if it looks wrong, it is wrong. I'd call that correct code generation...
I'll admit I was a little put off by this feature of Python at first, but as soon as I started working with it for a while, it just seemed natural. Now in my Java programming I always get annoyed when I have to spend time balancing my braces!
Also, I used to do a lot of work in Perl with a guy who never bothered to indent his code consistently (after all, it's the braces that define the block structure), and at least once a week he would call me over to look at some bug that became obvious as soon as I reformatted the code. And applying the pragmatic programming principle of DRY (Don't Repeat Yourself), what is the point of having the same semantic information encoded in both the formatting (where it is visible to the programmer) and the brace structure (where it is visible to the compiler)? If you've got the same information in two places, eventually they'll get out of sync, and you'll lose...
Re:Back to C... (Score:3)
That problem is addressed through the use of protocols. Some are formal, like the reference-counting protocol (implementing one is known as "adopting" that protocol); informal protocols can be defined at will. Note that this also gives you basically all the design capabilities of C++'s multiple inheritance with none of the associated problems.
"Protocols free method declarations from dependency on the class hierarchy, so they can be used in ways that classes and categories cannot. Protocols list methods that are (or may be) implemented somewhere, but the identity of the class that implements them is not of interest. What is of interest is whether or not a particular class conforms to the protocol--whether it has implementations of the methods the protocol declares. Thus objects can be grouped into types not just on the basis of similarities due to the fact that they inherit from the same class, but also on the basis of their similarity in conforming to the same protocol. Classes in unrelated branches of the inheritance hierarchy might be typed alike because they conform to the same protocol."
-- Object-Oriented Programming and the Objective-C Language [apple.com], p.99.
> What I like about Scheme is that you can query the datatype to see what it is,
In Obj-C you can ask an object what it is, whether it is a kind of some other thing, whether it responds to a given message (note that through the use of categories, this capability may be added at runtime), whether it conforms to a given protocol, yadayadayada.
On the other side of the fence, Java has a type for everything, and is correspondingly complex, too.
Obj-C is, well, C. You add just as much or little complexity as you wish.