Channel Bonding NICs with RHAS 2.1...

Started by Tazinator, June 25, 2004, 12:57:50 AM

Ok, I've now spent the last 3 days trying to get channel bonding to work correctly on 4 Red Hat AS 2.1 servers, each with 4 cards total:

  • (2) Broadcom GigE (5700) using the Tigon3 driver (tg3)
  • (2) Intel PRO/1000 cards using the e1000 driver
Kernel Version: 2.4.9-e.40smp
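For anyone who wants to follow along, the setup is the usual Red Hat bonding recipe (a sketch of the relevant files; IP addresses and eth names are placeholders, adjust to your layout):

```
# /etc/modules.conf
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (same idea for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```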

-------------------------------------

Ok, now, for whatever damn reason, bonding both pairs of cards together doesn't want to work. I have tried everything imaginable to make it work. No luck thus far.

  • Scenario 1:
    bond0 = Broadcoms
    bond1 = Intels
    No dice.
  • Scenario 2:
    bond0 = Intels
    bond1 = Broadcoms
    Again, no dice.
  • Scenario 3:
    bond0 = Intels
    no bond for the Broadcoms - running solo
    And yes, no dice.
  • Scenario 4:
    bond0 = Broadcoms
    no bond for the Intels - running solo
    50% working. The bond comes up, but only in mode=0 (round-robin) despite me telling it mode=1 (active-backup)   ::)
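For what it's worth, here is how I'm requesting the mode and how you can check which mode actually took effect (a sketch, assuming the stock 2.4 bonding module; whether this errata kernel's module honors max_bonds is an open question on my end):

```
# /etc/modules.conf -- ask for active-backup with link monitoring
alias bond0 bonding
options bond0 mode=1 miimon=100

# For the two-bond scenarios, the 2.4 bonding module only creates one
# device unless told otherwise; if the module supports it, both bonds
# then share the same option set:
#   alias bond0 bonding
#   alias bond1 bonding
#   options bonding max_bonds=2 mode=1 miimon=100

# Verify the mode the driver actually came up in:
#   cat /proc/net/bond0/info          (older 2.4 bonding)
#   cat /proc/net/bonding/bond0       (newer bonding versions)
```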
So now you may be saying, "Don't bond the damn things, just run 'em solo." Eh, can't. They run an Oracle 10g database and it needs to be configured for RAC. The two Intels are the internal database interconnect between the servers (a private intra-net). The Broadcoms are for the outside world. Because both servers utilize a SAN, they need to talk to each other unimpeded to let each other know what data has been written to the SAN.

This whole escapade is also what forces me to use RHAS 2.1 instead of 3: the SAN drivers are only certified for that version and that kernel.

Joy...

If anyone has any advice to offer here from past similar experience, I'm all ears. Since this has gotten me nowhere, I plan to try teaming next using BASP and Intel's teaming app.  >:(
"A well known hacker is a good hacker, an unknown hacker is a great hacker..."

I don't care what your parents told you, you aren't special.
  • https://github.com/tazinator

Taz, not sure if this will help, but it might be worth looking into. We just installed an IBM server with dual on-board Broadcom GigE NICs as well, and added a dual-port Intel PRO/1000 card to it. We were going to team them (2 teams), but under Win2k. The drivers that came with the Intel cards wouldn't work correctly, so we downloaded the updated versions that included the Intel management add-ons for those cards to allow teaming. When we installed the new drivers, they updated the Broadcom NICs to those drivers as well and allowed us to team them. Not sure if your Broadcoms are the same as ours, but if it's the same chipset, it might be worth trying the Intel drivers for those as well; it may cure your bonding issue. I know it worked for us in Windows, but I haven't tested it in *nix. Hopefully this is what you meant... Let me know.
*** Sleep: A completely inadequate substitute for caffeine. ***
01010010010101000100011001001101

Hi all,

We have the same problem getting channel bonding working on a Red Hat system. Our configuration is:

- 2 Intel PRO/1000 (e1000)
- 2 Broadcom (tg3)

on RHAS 2.1 e.40.

To resolve this, we're trying BASP (right now! and we hope it solves our problems)... but first you must replace the tg3 driver in your modules.conf: install bcm5700-7.1.9-e.40.i386.rpm and change tg3 to bcm5700 in modules.conf.
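Concretely, the driver swap in /etc/modules.conf looks like this (a sketch; the ethX names depend on your own layout):

```
# Before: Broadcom NICs bound to the in-kernel Tigon3 driver
#   alias eth2 tg3
#   alias eth3 tg3

# After installing bcm5700-7.1.9-e.40.i386.rpm, point the same
# interfaces at Broadcom's bcm5700 driver instead:
alias eth2 bcm5700
alias eth3 bcm5700
```

Then reload the modules (or reboot) so the bcm5700 driver claims the cards.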

I'm interested in hearing about others' experiences with channel bonding on RHAS.

nicolas

Actually, I have come to a resolution on this using BASP and iANS together. I will try to post my fix tonight for you.

I apologize for the delay in posting a how-to fix for this. I've been slammed with work (the company just moved to new office space) and I'm still recouping from Defcon ;) I will do my best to post it this week.
