I think some people try to compare apples and oranges with C++ vs [whatever]. C++ as a language is fairly portable, but it is still dependent on the underlying machine architecture. Not all systems support the STL equally well, either, so it's often useful for some of the more expansive libraries (such as Qt) to include their own stuff.
People also tend to think of C++ only for desktop application use - but it's used a lot more widely than that. There are environments where some of the things people complain about here are suddenly very, very useful and help keep code clean and maintainable.
I am not a "Windows-centric developer" nor am I insensitive to portability.
Originally Posted by RealNC
My point was not that types that specify things such as signedness or the exact bit width desired are bad. Quite the contrary; being able to specify these things is very useful and essential.
What is NOT essential is having a hundred different ways of specifying it! Heaven forbid you try to use code from more than one external library within the same program; before you know it, you've got dozens of different ways to do the exact same thing. To the extent that they're equivalent, that's fine; but as soon as you move away from integers and start talking about more advanced data types like vectors/lists/tables/whatever, you immediately run into incompatibilities and wasted resources (because you often have to deep-copy the elements from one type to another).

You also have to deal with differing semantics all the time: some libraries expect the caller to take "ownership" of objects returned from a function or handed back through out pointers, which means you have to remember to free or delete them. Other libraries just let you borrow their objects, and if you free one, the library will try to free or access it later and cause a segfault. The fact that these two behaviors are seemingly randomly interspersed among libraries, and that code doing the wrong thing still compiles, is braindead stupid.

Vala, which compiles down to C, at least implements the "unowned" and "out" keywords, so if you make the wrong assumption about memory management or the in/out direction of a pointer, you get a compile-time error. That's what I like to see. Ideally you wouldn't have to worry about this stuff at all, but I realize that would mean being stuck with a VM that has a lower performance ceiling than C++. So I'm willing to accept dealing with memory management, ownership, out pointers, and the rest... there's just no reason the compiler can't tell you when you get it wrong.
Detecting broken code and producing a compile error is something every sane compiled-language designer should strive for, regardless of any arguments about performance or anything else. It's absolutely indefensible to argue "no, I want my compiler to accept code that is going to crash". You create language constructs so that doing the wrong thing prevents a binary from being generated. Period. This is bar none the best feature of Java, and something sorely missing from C++ in many cases. And I don't see how having native code that lets you do nasty things is a reason why a C++ compiler couldn't check things like object ownership semantics. You might have to introduce a new keyword or two to make it happen, but that's called improvement. C++ has basically stagnated, and I'm tired of it letting you shoot yourself in the foot with a machine gun when all it needs is a safety lock.
Imagine if in Java or C#, every single developer who wrote a Java library had their own utility classes implementing the Collection interfaces? It seems almost unfathomably stupid from the point of view of a Java developer, who takes it for granted that the J2SE classes are acceptable. But for some reason, C++ developers think "OK, I'm writing a library... time to redefine the type system!"
To me, the worst part is the learning curve and managing the complexity of it all... for "elite" programmers who've been working with the same dozen type systems for a decade and know them all by heart, it's probably not too bad, because they know which types are assignable to which, and so on. But it introduces a massive and unnecessary learning curve, forcing you to look up docs and ask yourself, "if I assign type X to type Y, will I lose information?" And if the answer is yes, you have to figure out the best way to handle it: whether a cast works, or whether you have to convert both to a void pointer and copy bytes, or... let's just say it's the worst possible situation 9 times out of 10, and you consider yourself extremely lucky when you can just assign types from one library to another or pass them into a library function.
You also probably won't get a lot of pushback about these things from Windows programmers -- if nothing else, folks who code closely against the Win32 API (and only the Win32 API) have it easier than open source programmers, because all Win32 API code uses the same standard set of typedefs and the same programming paradigms. I've no love for Windows, but at least when you're programming native code for Windows, you don't have to re-learn the basics every time you want to do something. Study an API for 5 or 10 seconds and you already know what to do with it, and even the parameter names in MSDN are syntactically standardized enough that looking at the detailed docs is rarely necessary. This is a case where open source fragmentation has killed us, because nobody has agreed on a standard type system, so once the number of #includes in your files exceeds 3 or 4, you're in typedef hell.
Last edited by allquixotic; 06-16-2012 at 05:35 PM.
Agreed. D is a great language that hasn't got the following it deserves. I'm a Python man at heart, but if I have to go low-level, D is where I want to go.
Originally Posted by jayrulez
If there's a whole heap of different semantics when using libraries...that's not the fault of C++. That's the fault of the people making them, or the people trying to mesh them together. C++ won't hold your hand and won't try to force "proper programming principles" onto you - and this is on purpose. I recommend reading some of Stroustrup's FAQ for the reasons on why C++ is the way it is.
Just to add:
- A language with good productivity
- Performance on par with C++ or better
- Easier to use
- All code in one file (no .h headers)
- The standard D library (Phobos) is much better than the STL
- Many features: metaprogramming, CTFE, ...
- Easy to write parallel programs
- No virtual machine, unlike C# or Java
- Designed for modern programming
And much more...
The name of this language is D; try it.
Fedora provides the ldc compiler, which uses LLVM.
Last edited by bioinfornatics; 06-16-2012 at 05:49 PM.
The only real compiled languages right now for open source projects are C and C++. They're easy to build for people who download stuff and compile it themselves. If you start using Pascal, D, or whatever else, many people won't bother. They expect "./configure && make install" or "cmake && make install".
Originally Posted by bioinfornatics
Let's face it: C and C++ *are* the standard in open Unix systems. And it doesn't look like this is about to change. IMO other languages are more important for OSes where source code is not the primary software distribution method.
D programming language - It fixes the outstanding issues of C++, adds safety and many innovative features (contracts, slices, scopes, templates, mixins, CTFE, UFCS, etc.), and is easy to use (the GC is *optional*), with a familiar, clean syntax and without sacrificing native power or efficiency.
Nimrod programming language - also a very innovative and unique language with native power, great metaprogramming, and an awesome GC. It's as expressive as Lisp, as efficient as C, and its syntax is similar to Python's.
I think C# is more productive and fun than C++.
Also, Microsoft Visual Studio together with the Resharper extension is awesome.
I hope Linux gets something like this too...
Forced garbage collection, and unstable (D2 came out very fast and broke compatibility with D1). I'd say D is more like a modern Algol in trying to do too much. I'm not very interested, and C++11 removes just about all of the motivation for using D.
Originally Posted by Lattyware
I'd rather build on something more innovative like clay, although I have yet to fully test drive it.
Last edited by bnolsen; 06-16-2012 at 08:17 PM.
The fact that every single library redefines fixed-width integer types indicates a fundamental issue in the design of C++. Spinning this as a "feature" flies in the face of qint32, int32_t, boost::int32_t, GLint, DWORD, and the dozens of other custom types that are written to work around this issue.
Originally Posted by mirv
The "performance" argument for native types is laughable at best. Using a native int type will not magically make your code faster when you recompile on a 16-bit architecture - it will merely break your code, because a++ will suddenly overflow at 2^16 instead of 2^32 (and if you check for overflow, your code will be slower than if you had used the correct fixed-width type from the beginning).
In short, bollocks. Every well-written portable library (re)defines the same basic fixed-width integer types and uses them exclusively instead of "int", "short" and the like. C99 realized this problem and introduced <stdint.h>. It's high time C++ did the same.
(As an interesting aside, the CLR handles fixed vs native integers in an even better way: integers are all fixed-width by default (int8, int16, int32, int64), and there is a special "native int" type that conforms to the bitness of the underlying platform and has properties like Size that you can query. Best of both worlds: you get correct code by default, and you can explicitly drop down to native int when necessary - which is almost never.)