
Thread: Support for kernel 2.6.29?

  1. #21
    Join Date
    Jul 2008
    Posts
    1,730

    Default

    a) I don't use compiz - it is a broken POS and always was
    b) I don't use Lenny. I don't use Debian. I never will
    c) I don't use ancient software, so no X server 1.4.x but something more recent - 1.6.1
    d) Even with earlier xorg-server versions I never saw that mysterious 'tearing'.
    e) I am using tvtime, which does not work without Xv - and it works well.

  2. #22
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,538

    Default

    Video tearing is weird; some people see it easily and it really bugs them, other people (like most developers) don't see it at all. I was checking out the open drivers that shipped with Jaunty and watching Big Buck Bunny - I thought it was really smooth, but one of the guys from our multimedia team came over and started pointing out tears every few seconds. I never saw a single one.

  3. #23
    Join Date
    Aug 2007
    Posts
    6,645

    Default

    My eyes are trained to see those errors; I saw them with VDPAU plus compositing enabled at once, too. But compared to fglrx, the open-source ATI driver is smooth. Nvidia is definitely the best driver for video, but the ATI OSS driver is not bad.

  4. #24
    Join Date
    Jul 2008
    Posts
    1,730

    Default

    Hmm, I am not too distortion-resistant - I can't stand 75 Hz on a CRT, for example. But video looks good on my desktop...

  5. #25
    Join Date
    Jun 2006
    Location
    Poland
    Posts
    117

    Default

    Quote Originally Posted by Kano View Post
    Well, 2.6.29 support can be patched - with a smaller patch when your distro provides some extra files, or with a very huge one otherwise. But 2.6.30 seems to be really problematic. When you get it to compile, it uses 2 symbols which are not in the kernel anymore; one could possibly be patched, but the other is in the binary part, so no go. I do not understand why, when ATI only provides drivers for the latest cards, they are not able to try a new kernel. Ubuntu even has a collection of every mainline kernel incl. rc builds, so they would not even need to compile it on their own. Nvidia somehow manages for their current cards; for the others I am still waiting for official 2.6.30 support, but 2.6.29 is there for every card.
    I think you are talking about "pci_enable_msi".
    I just made a small placeholder function like this:

    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        int pci_out;

        pci_out = pci_enable_msi_block(pdev, 1);
        return pci_out;
    }

    and added it somewhere in the fglrx module.
    I don't guarantee that it will actually work, because the last time I checked this I was on 2.6.30-rc2 and I have since deleted my patch.
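    A fuller sketch of the same shim, with a kernel-version guard added so older kernels keep using the real pci_enable_msi; the guard and its placement in the fglrx kernel-glue sources are assumptions, not something from this thread:

    #include <linux/version.h>
    #include <linux/pci.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 30)
    /* 2.6.30 turned pci_enable_msi into a macro wrapping pci_enable_msi_block(),
     * so drop the macro and provide a real symbol for the binary blob to resolve */
    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        /* request a single MSI vector, which is what the old function did */
        return pci_enable_msi_block(pdev, 1);
    }
    #endif

    The #undef matters: without it, the function-like macro from the header would mangle the wrapper's own definition and it would not compile.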

  6. #26
    Join Date
    Aug 2007
    Posts
    6,645

    Default

    Interesting approach - can you create a better 2.6.29 patch too, without new includes?

  7. #27
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,740

    Default

    Quote Originally Posted by sobkas View Post
    I think you are talking about "pci_enable_msi".
    I just made a small placeholder function like this:

    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        int pci_out;

        pci_out = pci_enable_msi_block(pdev, 1);
        return pci_out;
    }

    and added it somewhere in the fglrx module.
    I don't guarantee that it will actually work, because the last time I checked this I was on 2.6.30-rc2 and I have since deleted my patch.
    Hmm, I don't really see why the '#define pci_enable_msi(pdev) pci_enable_msi_block(pdev, 1)' from the kernel sources should work any differently from the code you gave. That is, as far as I can see your code is the same as

    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        return pci_enable_msi_block(pdev, 1);
    }

    which should be the same as the macro, no?
    PS. The macro was checked against linux-2.6.30-rc5.
    Edit: Btw, make sure you have CONFIG_PCI_MSI enabled in your kernel or none of this will work. IIRC Catalyst warned about it at one point.
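    A compile-time guard is one way to make that dependency explicit - a minimal sketch, assuming the 2.6.30-era headers fall back to error-returning stubs when CONFIG_PCI_MSI is off:

    #ifndef CONFIG_PCI_MSI
    /* without MSI support in the kernel config the shim would still build, but the
     * MSI calls would only hit header stubs, so fail the build early instead */
    #error "CONFIG_PCI_MSI must be enabled in the kernel for this fglrx shim to do anything"
    #endif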
    Last edited by nanonyme; 05-09-2009 at 04:18 PM.

  8. #28
    Join Date
    Jun 2006
    Location
    Poland
    Posts
    117

    Default

    Quote Originally Posted by nanonyme View Post
    Hmm, I don't really see why the '#define pci_enable_msi(pdev) pci_enable_msi_block(pdev, 1)' from the kernel sources should work any differently from the code you gave. That is, as far as I can see your code is the same as

    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        return pci_enable_msi_block(pdev, 1);
    }

    which should be the same as the macro, no?
    PS. The macro was checked against linux-2.6.30-rc5.
    Edit: Btw, make sure you have CONFIG_PCI_MSI enabled in your kernel or none of this will work. IIRC Catalyst warned about it at one point.
    A macro is just a macro: it's an instruction for the preprocessor to replace one piece of text with another, and the preprocessor only works on source code.
    It was never meant to touch binaries.

    With fglrx the main problem is that pci_enable_msi is used by the binary-only part of the module:

    nm libfglrx_ip.a.GCC4 | grep pci_enable_msi
                     U pci_enable_msi

    There must be a function called pci_enable_msi or the module won't work.

    In short, the macro is provided to keep the API backward compatible while the ABI changes.
    Essentially the preprocessor just replaces the text "pci_enable_msi(pdev)" with "pci_enable_msi_block(pdev, 1)".
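    To illustrate the API-vs-ABI point with a small sketch (example_enable() is just a made-up caller, not anything from fglrx):

    #include <linux/pci.h>

    /* freshly compiled source goes through the header macro, so its object code
     * ends up referencing pci_enable_msi_block, never pci_enable_msi */
    static int example_enable(struct pci_dev *pdev)
    {
        return pci_enable_msi(pdev);  /* preprocessed into pci_enable_msi_block(pdev, 1) */
    }

    The prebuilt libfglrx_ip.a.GCC4 object was compiled long before that rename, so it still carries the undefined reference shown by nm above, and only a real function with exactly that name can satisfy it when the module is linked.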

  9. #29
    Join Date
    Aug 2008
    Location
    Finland
    Posts
    1,740

    Default

    Quote Originally Posted by sobkas View Post
    A macro is just a macro: it's an instruction for the preprocessor to replace one piece of text with another, and the preprocessor only works on source code.
    It was never meant to touch binaries.

    With fglrx the main problem is that pci_enable_msi is used by the binary-only part of the module:

    nm libfglrx_ip.a.GCC4 | grep pci_enable_msi
                     U pci_enable_msi

    There must be a function called pci_enable_msi or the module won't work.

    In short, the macro is provided to keep the API backward compatible while the ABI changes.
    Essentially the preprocessor just replaces the text "pci_enable_msi(pdev)" with "pci_enable_msi_block(pdev, 1)".
    Point taken. You still don't need the integer, though, right?

  10. #30
    Join Date
    Jun 2006
    Location
    Poland
    Posts
    117

    Default

    Quote Originally Posted by nanonyme View Post
    Point taken. You still don't need the integer, though, right?
    What integer?
    You mean "int pci_out;"?
    Not really, I just like this way of writing functions (debugging and all).
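    For what it's worth, a sketch of what the temporary buys while debugging - it gives one obvious spot to log or breakpoint the result (the printk line is an addition for illustration, not part of the original snippet):

    #include <linux/kernel.h>
    #include <linux/pci.h>

    #undef pci_enable_msi
    int pci_enable_msi(struct pci_dev *pdev)
    {
        int pci_out;

        pci_out = pci_enable_msi_block(pdev, 1);
        /* single place to trace the return value while debugging */
        printk(KERN_DEBUG "fglrx shim: pci_enable_msi_block() returned %d\n", pci_out);
        return pci_out;
    }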
