
Thread: Approved: C++0x Will Be An International Standard

  1. #11
    Join Date
    Sep 2010
    Posts
    453

    Default

    Quote Originally Posted by mirv View Post
    I'm not convinced binary literals are great - I'd use hex in all cases even if they were there. It's actually more readable to have things in hex. As for pointer syntax... well, that can be confusing (nullptr is a bit late, but helpful nevertheless), particularly with function pointers, but I don't think it's that bad.

    I'm quite enjoying the threading with C++0x (or C++11, or whatever) - glad to see that around. Combined with variadic templates, it's quite useful.
    Do you also use hex when each bit in a word (16-bit) is a flag for something? And, let's go crazy, in a dword or qword (32- and 64-bit)?

    Seriously, using hex is cumbersome in some situations.
    Other languages are starting to add binary literals.

    The problem with pointer syntax is that going from a pointer to its value and from a value to its pointer uses the same sign in some situations. I'm referring to the use of the '*' and '&' signs, and to the fact that a pointer is not implemented as a distinct type.
    You have to remember everything. Not a big deal if it's a small example exercise, but in a big project this is how errors easily slip in.
    Last edited by plonoma; 08-15-2011 at 11:06 AM.

  2. #12
    Join Date
    Oct 2007
    Posts
    912

    Default

    Quote Originally Posted by plonoma View Post
    Do you also use hex when each bit in a word (16-bit) is a flag for something? And, let's go crazy, in a dword or qword (32- and 64-bit)?

    Seriously, using hex is cumbersome in some situations.
    Other languages are starting to add binary literals.

    The problem with pointer syntax is that going from a pointer to its value and from a value to its pointer uses the same sign in some situations. I'm referring to the use of the '*' and '&' signs, and to the fact that a pointer is not implemented as a distinct type.
    You have to remember everything. Not a big deal if it's a small example exercise, but in a big project this is how errors easily slip in.
    Yes, I do. Typically, of course, I'll define a macro and use &, |, ^ for bit manipulation. I work with 8-bit, 16-bit, and 32-bit words in such a manner, and have dealt with the odd 64-bit word too. Particularly in the latter cases, binary literals would be just plain bad: too great a risk of putting the bits in the wrong place. Far easier with hex.

  3. #13
    Join Date
    Jul 2009
    Posts
    221

    Default

    You don't use bitset, then? But truth be told, bit manipulation is most often used in performance-critical code, and
    Code:
    std::bitset<2> bits;
    ...
    if( bits[1] ){ ... }
    isn't quite as fast as 'if( bits & FLAG ){ ... }'.

    Btw, why isn't the new release C++B?

  4. #14
    Join Date
    Jun 2011
    Location
    Barcelona
    Posts
    74

    Default

    Quote Originally Posted by elanthis View Post
    D is massively larger. I just need to point that out. Complaining about the size of C++ and then recommending D as an alternative makes as much sense as complaining about the fuel economy of an old car and recommending a Hummer to replace it.
    I didn't mean the implementation, but the number of odd keywords and the grammar.

    Quote Originally Posted by elanthis View Post
    D was a nice attempt at a better C++. The best, even. It just screwed up the way that every last single other attempt at the same has screwed up: it failed to actually be a (very near to) pure superset of C. C++ has a very specific set of properties that make it so successful. One of those is being a C superset. Obviously a clean syntax is not one of those, but frankly syntax is not that important in the grand scheme of things. Being able to do exactly what I need to do is important.
    It sure was. I see your point; sadly, nobody is using D in big projects.

    Quote Originally Posted by elanthis View Post
    The things C++ most needs fixed are changes that will give the programmer more control, less of a "safety net," and which would only mean more features and a larger spec. It would certainly be possible to design an actually successful "better C++" language. This will likely not happen, though, because the potential gain is basically nothing more than "fix a few syntax warts" and that would have to compete with "not compatible with the large body of C++ code already out there." Just not worth it.
    Time for F?

  5. #15
    Join Date
    Sep 2010
    Posts
    453

    Default

    Quote Originally Posted by mirv View Post
    Yes, I do. Typically, of course, I'll define a macro and use &, |, ^ for bit manipulation. I work with 8-bit, 16-bit, and 32-bit words in such a manner, and have dealt with the odd 64-bit word too. Particularly in the latter cases, binary literals would be just plain bad: too great a risk of putting the bits in the wrong place. Far easier with hex.
    I was not talking about bit manipulation!

    When you have a control register with each bit being a flag for something, it's much more practical to be able to write a binary literal.
    In this situation, working with hex is more error-prone and complicated.
    Especially if your documentation goes like '... the third bit enables <some random thing> ...', it's really practical to be able to use binary literals.

  6. #16
    Join Date
    Oct 2007
    Posts
    912

    Default

    Quote Originally Posted by plonoma View Post
    I was not talking about bit manipulation!

    When you have a control register with each bit being a flag for something, it's much more practical to be able to write a binary literal.
    In this situation, working with hex is more error-prone and complicated.
    Especially if your documentation goes like '... the third bit enables <some random thing> ...', it's really practical to be able to use binary literals.
    My work involves quite a lot of controlling register flags, and that's typically bit manipulation. Any documentation is written in plain text anyway, so you can write binary there just fine (often the manuals use a table to group bit meanings together), but actual code will almost certainly use a macro, which is far more readable than using a literal directly anyway. Even displaying serial line output is far better done in hex, as it's more readable (assuming you don't care about the start/stop bits; if you do, then you're looking at timing and probably using a logic analyser anyway).
    Sorry, I still don't understand why you think binary literals would be better than hex.

  7. #17
    Join Date
    Sep 2010
    Posts
    453

    Default

    I'm not saying they are better everywhere.
    In some situations they are better, as in more practical to use.
    It's not about writing the documentation. It's that when the documentation describes things per bit, it's sometimes more practical to use a binary literal.

    You seem to think that it's a contest between the two. Lose that thought.
    Sometimes hex is more convenient, sometimes binary is.

    Just because you're used to working in it doesn't make it more natural.
    And using a macro for something that should be a core language feature is, in my eyes, a fail!

    Your serial output is an example where, depending on what you're doing, showing hex or binary would be better.
    Last edited by plonoma; 08-15-2011 at 03:18 PM.

  8. #18

    Default

    Quote Originally Posted by Cyborg16 View Post
    You don't use bitset, then? But truth be told, bit manipulation is most often used in performance-critical code, and
    Code:
    std::bitset<2> bits;
    ...
    if( bits[1] ){ ... }
    isn't quite as fast as 'if( bits & FLAG ){ ... }'
    That's a load of bullshit; this can be trivially optimized away by the compiler.

  9. #19
    Join Date
    Oct 2008
    Posts
    3,038

    Default

    Quote Originally Posted by plonoma View Post
    In some situations they are better, as in more practical to use.
    It's not about writing the documentation. It's that when the documentation describes things per bit, it's sometimes more practical to use a binary literal.
    I have to agree with mirv here. When would a binary literal ever be a better solution than hex? Can you give an example? In actual code rather than a generic description?

    0x8 = 4th bit set. Each hex digit = 4 bits, versus having to count out all those zeroes and make sure you aren't off by one.

    Maybe it just has to do with how comfortable someone is thinking in hex? I've always found it very easy.
    Last edited by smitty3268; 08-16-2011 at 12:15 AM.

  10. #20
    Join Date
    Jul 2009
    Posts
    221

    Default

    Quote Originally Posted by AnonymousCoward View Post
    That's a load of bullshit, this can be trivially optimzed away by the compiler.
    Hmm, I tested bitset in the past and definitely found some performance let-down. You're probably right that the example given could be optimised; it might have been a function like
    Code:
    bool testBit(size_t n){
        return bits[n];
    }
    which prevented the optimisation.
