*with thanks to Paul Delaney ae6jn! 
 


If you're on Linux and have trouble with source-route filtering from your 
ISP, download and install iproute2. I understand it is a default install 
on the newer Red Hat systems, and an .rpm is available for Mandrake. If it's 
not available as a package for your system, ftp the source and compile it; 
you will need to copy the binaries into place by hand, as there is no 
"make install" target. iproute2 can co-exist with your current standard 
tools, however keep in mind that the standard "route" command can only show 
and maintain your MAIN routing table... and that limitation is exactly why 
iproute2 is the key here.
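
A quick way to check whether iproute2 and kernel policy routing are already 
available before you go compiling (a minimal sketch; paths and output will 
vary by distribution):

#look for the ip binary in the usual places:
which ip || ls -l /sbin/ip /usr/sbin/ip
#if these print your routes and the local/main/default rules, you're all set:
ip route list table main
ip rule list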
 


What I do here: 
 #enable the NIC and set local routing for the lan: 
 ifconfig eth0 64.51.9.106 netmask 255.255.255.224 up 
 #create a tunnel to my MFNOS Encap server: 
 ifconfig tunl0 64.51.9.106 pointopoint 64.51.9.100 up 
 #set default routing to my router: 
 route add default gw 64.51.9.99 eth0 
 #set my RF nodes for routing: 
 ifconfig ax0 netmask 255.255.255.240 
 ifconfig nr0 netmask 255.255.255.240 
 ifconfig nr1 netmask 255.255.255.240 
 ifconfig nr2 netmask 255.255.255.240 
 #set a perm arp for my tnc3 running xnet which acts as the RF wan switch: 
 arp -i ax0 -H ax25 -s 44.88.40.1 n1uro-9 
 #allow my commercial IP to use the tunnel for local routing 
 route add -host 44.88.40.5 gw 64.51.9.100 tunl0 
 #set other RF routes accordingly: (via spgma.n1uro.ampr.org [44.88.40.1]) 
 route add -host 44.88.40.1 gw 44.88.40.3 ax0 
 route add -net 44.88.44.0/26 gw spgma.n1uro.ampr.org ax0 
 route add -net 44.88.0.0/26 gw spgma.n1uro.ampr.org ax0 
 route add -net 44.88.40.16/28 gw spgma.n1uro.ampr.org ax0 
 route add -net 44.88.40.32/28 gw spgma.n1uro.ampr.org ax0 
 #the above takes care of kernel rules (aka: main route table) 
 #now let's add to my existing kernel firewall and give it priority: 
 iptables -I INPUT 1 -j ACCEPT -s 44.0.0.0/8 -d 0.0.0.0/0 
 iptables -I INPUT 1 -j ACCEPT -s 0.0.0.0/0 -d 44.0.0.0/8 
 iptables -I FORWARD 1 -j ACCEPT -s 44.0.0.0/8 -d 0.0.0.0/0 
 iptables -I FORWARD 1 -j ACCEPT -s 0.0.0.0/0 -d 44.0.0.0/8 
 iptables -I OUTPUT 1 -j ACCEPT -s 44.0.0.0/8 -d 0.0.0.0/0 
 iptables -I OUTPUT 1 -j ACCEPT -s 0.0.0.0/0 -d 44.0.0.0/8 
 #let's also add in encap rules (I run both types here) 
 iptables -I INPUT 1 -j ACCEPT --proto 4 
 iptables -I INPUT 1 -j ACCEPT --proto 93 
 iptables -I OUTPUT 1 -j ACCEPT --proto 4 
 iptables -I OUTPUT 1 -j ACCEPT --proto 93 
 iptables -I FORWARD 1 -j ACCEPT --proto 4 
 iptables -I FORWARD 1 -j ACCEPT --proto 93 
 #using iproute2, let's create a priority rule and make route table 1 
 ip rule add from 44.0.0.0/8 pref 1 table 1 
 #note: I allow 44/8 because of the multitude of subnets I route for 
 #and it is simple to manage this way. you could specify a single 44.x.x.x 
 #ip and it will be fine if that's all you need to route for. 
 # 
 #now let's add routing for table 1 
 ip route add default via 44.88.40.5 dev tunl0 onlink table 1 
 #note: onlink tells the kernel to treat the gateway as directly reachable 
 #('on link') via that device, even if it isn't in a locally configured subnet. 
 ip route add 44.88.40.1 dev ax0 table 1 
 ip route add 44.88.40.5 dev tunl0 table 1 
 ip route add 44.88.0.0/26 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.88.40.0/26 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.88.44.0/24 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.88.40.11 dev tunl0 via 44.88.40.5 onlink table 1 
 ip route add 44.88.40.2 dev tunl0 via 44.88.40.5 onlink table 1 
 ip route add 44.64.10.0/24 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.64.8.0/24 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.64.0.0/22 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.64.4.0/22 via 44.88.40.1 dev ax0 table 1 
 ip route add 44.68.0.0/18 via 44.88.40.1 dev ax0 table 1 
 # end of config. you may need to rewrite your munge script accordingly. 
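
If you keep all of the above in a startup script, it may help to flush the 
policy pieces first so the script can be re-run cleanly without stacking 
duplicate rules (a small sketch under that assumption; flushing table 1 is 
harmless if it's already empty):

#clear table 1 and the priority-1 rule before re-adding them:
ip route flush table 1
ip rule del pref 1 2>/dev/null
#...then re-run the 'ip rule add' and 'ip route add ... table 1' lines above.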
 


Now I'll show you the difference between what the 'route' output and the 
iproute2 tables look like: 
 


root@packet:~# route 
 Kernel IP routing table 
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 qso.n1uro.ampr. ipuro.n1uro.com 255.255.255.255 UGH   0      0        0 eth0
 spgma.n1uro.amp dx.n1uro.ampr.o 255.255.255.255 UGH   0      0        0 ax0
 ipuro.n1uro.com gw-uroweb.n1uro 255.255.255.255 UGH   0      0        0 eth0
 gw.n1uro.ampr.o ipuro.n1uro.com 255.255.255.255 UGH   0      0        0 eth0
 n1uro.ampr.org  ipuro.n1uro.com 255.255.255.255 UGH   0      0        0 eth0
 44.88.40.32     spgma.n1uro.amp 255.255.255.240 UG    0      0        0 ax0
 44.88.40.0      *               255.255.255.240 U     0      0        0 ax0
 44.88.40.0      *               255.255.255.240 U     0      0        0 nr0
 44.88.40.0      *               255.255.255.240 U     0      0        0 nr1
 44.88.40.0      *               255.255.255.240 U     0      0        0 nr2
 44.88.40.16     spgma.n1uro.amp 255.255.255.240 UG    0      0        0 ax0
 localnet        *               255.255.255.224 U     0      0        0 eth0
 44.88.0.0       spgma.n1uro.amp 255.255.255.192 UG    0      0        0 ax0
 44.88.44.0      spgma.n1uro.amp 255.255.255.192 UG    0      0        0 ax0
 44.64.10.0      spgma.n1uro.amp 255.255.255.0   UG    0      0        0 ax0
 44.64.8.0       spgma.n1uro.amp 255.255.255.0   UG    0      0        0 ax0
 127.0.0.0       *               255.0.0.0       U     0      0        0 lo
 44.0.0.0        ipuro.n1uro.com 255.0.0.0       UG    0      0        0 eth0
 default         gw-uroweb.n1uro 0.0.0.0         UG    0      0        0 eth0
 


Notice the 'default' route is going out via eth0. Also notice I have 
another server on my LAN running MFNOS, which I use as an 'encap' server for 
net-44. While this default route points to my router, I also added a static 
route in it pointing 44/8 to 64.51.9.100, which is my MFNOS box's commercial 
IP. When configuring your tunnels, you may point them according to the 
encap.txt file (which is what my MFNOS does, so there is no need for me to 
repeat those routes here). 
 


Now let's look at things in iproute2...first the ruleset: 
 root@packet:~# ip rule 
 0: from all lookup local 
 1: from 44.0.0.0/8 lookup 1 
 32766: from all lookup main 
 32767: from all lookup default 
 


Notice rule #1 sends everything sourced from a 44-net IP to table #1, so 
let's look at how table #1 is configured: 
 


root@packet:~# ip route list table 1 
 44.88.40.11 dev tunl0 scope link 
 44.88.40.1 dev ax0 scope link 
 44.88.40.2 via 44.88.40.5 dev tunl0 onlink 
 44.88.40.5 dev tunl0 scope link 
 44.88.0.0/26 via 44.88.40.1 dev ax0 
 44.88.40.0/26 via 44.88.40.1 dev ax0 
 44.64.10.0/24 via 44.88.40.1 dev ax0 
 44.88.44.0/24 via 44.88.40.1 dev ax0 
 44.64.8.0/24 via 44.88.40.1 dev ax0 
 44.64.0.0/22 via 44.88.40.1 dev ax0 
 44.64.4.0/22 via 44.88.40.1 dev ax0 
 44.68.0.0/18 via 44.88.40.1 dev ax0 
 default via 44.88.40.5 dev tunl0 onlink 
 


And to compare against the 'route' output above, here is the main table as 
iproute2 shows it: 
 


root@packet:~# ip route list table main 
 44.88.40.11 via 64.51.9.100 dev eth0 
 44.88.40.1 via 44.88.40.3 dev ax0 scope link 
 64.51.9.100 via 64.51.9.99 dev eth0 
 44.88.40.2 via 64.51.9.100 dev eth0 
 44.88.40.5 via 64.51.9.100 dev eth0 
 44.88.40.32/28 via 44.88.40.1 dev ax0 
 44.88.40.0/28 dev ax0 proto kernel scope link src 44.88.40.3 
 44.88.40.0/28 dev nr0 proto kernel scope link src 44.88.40.3 
 44.88.40.0/28 dev nr1 proto kernel scope link src 44.88.40.3 
 44.88.40.0/28 dev nr2 proto kernel scope link src 44.88.40.3 
 44.88.40.16/28 via 44.88.40.1 dev ax0 
 64.51.9.96/27 dev eth0 proto kernel scope link src 64.51.9.106 
 44.88.0.0/26 via 44.88.40.1 dev ax0 
 44.88.44.0/26 via 44.88.40.1 dev ax0 
 44.64.10.0/24 via 44.88.40.1 dev ax0 
 44.64.8.0/24 via 44.88.40.1 dev ax0 
 127.0.0.0/8 dev lo scope link 
 44.0.0.0/8 via 64.51.9.100 dev eth0 
 default via 64.51.9.99 dev eth0 
 


As programmed, the 'default' route in the main table goes to my router 
 64.51.9.99, while the default route in table #1 goes to 44.88.40.5 VIA 
 device tunl0. 
 


Now let's see if routing to me from the internet, sourced as 44.88.40.3, 
passes through (using a server hosted by w1uu): 
 


traceroute to dx.n1uro.ampr.org (44.88.40.3), 30 hops max, 38 byte packets 
 [snip unneeded info] 
 16 muir-gw-nodeb-6509.ucsd.edu (132.239.255.163) 89.267 ms 89.045 ms 89.331 ms 
 17 mirrorshades.ucsd.edu (128.54.16.18) 89.240 ms 89.353 ms 88.943 ms 
 18 ipuro.n1uro.com (64.51.9.100) 120.065 ms 257.018 ms 282.466 ms 
 19 dx.n1uro.ampr.org (44.88.40.3) 295.958 ms 183.320 ms 526.148 ms 
 


Notice hops #18 and #19 carefully. Mirrorshades provides routing to my 
MFNOS server, which then encaps the 44-net traffic to my Linux server; 
using iproute2, the Linux box is able to route back out through MFNOS. 
 


Now let's see if I can actually get sourced 44-net routing on the RF WAN 
from Connecticut to Massachusetts and through. (Note that the route from 
Linux to the RF switch uses an AXIP link to a PC-FlexNet host, which then 
runs via 9600-baud serial to 44.88.40.1, thus using AX.25 for the transport 
over ethernet and out.) 
 


16 muir-gw-nodeb-6509.ucsd.edu (132.239.255.163) 107.526 ms 111.003 ms 96.309 ms 
 17 mirrorshades.ucsd.edu (128.54.16.18) 90.368 ms 89.070 ms 89.105 ms 
 18 ipuro.n1uro.com (64.51.9.100) 231.479 ms 282.092 ms 280.000 ms 
 19 packet.n1uro.com (64.51.9.106) 247.887 ms 270.200 ms 249.626 ms 
 20 spgma.n1uro.ampr.org (44.88.40.1) 515.086 ms 551.195 ms 473.011 ms 
 21 icrc.n1uro.ampr.org (44.88.44.1) 2889.424 ms !P 2945.520 ms !P 3002.224 ms !P 
 


In this case, iproute2 sees that the replies are sourced from 44.88.40.1 and 
44.88.44.1 respectively, and uses the default route in table #1 to encap 
them back to MFNOS instead of the default route shown by 'route' or in the 
main table. 
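
If you want to confirm which table a 44-sourced packet will use without 
running a traceroute, 'ip route get' can resolve a destination for a given 
source address (a small illustration; the destination here is just 
mirrorshades' address from the trace above):

#which route does a packet sourced from my 44-net address take?
ip route get 128.54.16.18 from 44.88.40.3
#and the same destination sourced from my commercial IP:
ip route get 128.54.16.18 from 64.51.9.106

The first lookup should come back via 44.88.40.5 on tunl0 (table #1), the 
second via 64.51.9.99 on eth0 (main table).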
 


the command "ip address" may also be used instead of ifconfig and the 
 command "ip link" may be used to bring an interface up|down. "ip tunnel" 
may 
 also be used to manage your tunnel interfaces and supports ipip|gre|sit 
 tunnels. Nice to have if you wish to play with ipv6. 
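
For example, the eth0 lines near the top could be written in iproute2 syntax, 
and "ip tunnel" can create a dedicated ipip device (a rough illustration 
rather than the exact commands I run; 'mytun' is just a made-up name, and the 
tunl0 device used above is the kernel's built-in fallback ipip device, so it 
never needs an 'ip tunnel add'):

#iproute2 equivalents of the ifconfig/route lines for eth0:
ip address add 64.51.9.106/27 dev eth0
ip link set eth0 up
ip route add default via 64.51.9.99 dev eth0
#creating a named ipip tunnel device:
ip tunnel add mytun mode ipip local 64.51.9.106 remote 64.51.9.100 ttl 64
ip link set mytun up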
 


Of course, you may easily simplify all this if you're not routing for any 
other net-44 IPs or subnets by using the following: 
 


#!/bin/bash 
# 
ip rule add from {your-44-net-ip} pref 1 table 1 
ip route add {tunnel-host-ip} via {your-default-gw-ip} dev eth0 onlink 
ip route add default dev tunl0 via {tunnel-hosts-44-net-ip} onlink table 1 
# 
#note: the last 'ip route' statement may need to be applied for the routes 
#in encap.txt, thus you may need to rewrite your munge script accordingly. 
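
For those encap.txt routes, a munge script could look something like this 
(a minimal sketch, assuming your encap.txt lives in /var/ampr and uses the 
usual 'route addprivate 44.x.x.x/len encap a.b.c.d' lines; adjust the path 
and your table 1 specifics to taste):

#!/bin/bash
#rebuild table 1 from encap.txt (hypothetical path/format -- check your own copy)
ip route flush table 1
while read cmd action net encap gw; do
    #only act on the 'route ... encap ...' lines
    [ "$cmd" = "route" ] || continue
    [ "$encap" = "encap" ] || continue
    ip route add "$net" dev tunl0 via "$gw" onlink table 1
done < /var/ampr/encap.txt
#afterwards, re-add your own local 44-net routes and the table 1 default.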
 


With more ISPs incorporating source-route filtering, it may be a good idea 
to configure this whether or not you actually need it now... this will combat 
the dreaded 'is mirrorshades down? I don't have routing anymore' syndrome :-)