HowTo: Elastix DAHDI Trunk Routing with DID


If you have multiple FXO (PSTN) lines into your PBX, it is always nice to be able to route these in-bound calls based on the physical line they arrive upon.  Getting this working with DAHDI in Elastix has been driving me up the wall!

 

This issue has been bugging me for over a week now and I have finally got it to work.  I have two trunks connected via FXO modules on a TDM400 card, but I could not get DID working with them (CLI with BT was already sorted).  Once Asterisk had the call, I could not make it route the call based on which number/line the caller had dialled.  Not the number the caller is calling from, which is CLI or CID, but the number they dialled to make your line ‘ring’.

Asterisk was either saying there was no route and answering the call to say the number you had called was not in service, or it was handling the two lines in exactly the same way, i.e. it could not tell them apart.  Here I detail my findings so you can process the lines automatically.

I had most of the configuration right, but I had to hand edit another configuration file to get the changes made via the web interface actually working.  Trying to find this last little bit of information on the forums has been maddening to say the least.

Changing the route

First you need to correct the route handler by changing a setting in a configuration file.  There is no graphical interface for this I’m afraid, and it is the only file you need to edit by hand, using a suitable editor.

The default setting in this configuration file is ‘from-pstn’ and this needs to be changed to ‘from-zaptel’.  You need to edit:

/etc/asterisk/dahdi-channels.conf

You need to find the correct section for your line connection.  For me this was lines 3 & 4.  Below are the original settings for my channel 3:

;;; line="3 WCTDM/4/2 FXSKS"
signalling=fxs_ks
callerid=asreceived
group=0
context=from-pstn
channel => 3
callerid=
group=
context=default

And you need to edit each channel’s section so that it becomes like this:

;;; line="3 WCTDM/4/2 FXSKS"
signalling=fxs_ks
callerid=asreceived
group=0
context=from-zaptel
channel => 3
callerid=
group=
context=default

Then save the file back and restart Asterisk.
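If you are unsure how to restart Asterisk from the command line, one way that should work on an Elastix/FreePBX box (the exact command varies between versions, so treat this as a hint rather than gospel) is:

amportal restart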

Marking the Channel DID

The next stage is to assign DID numbers to these channels so a decision can be made on how to process the call based on line ID.

Elastix does not expose the required facility in its own interface, so you need to un-embed the FreePBX console; details are here.

Once in the FreePBX console, you need to choose ‘ZAP Channel DIDs’ from the menu on the left.  You should get a screen similar to:

[Screenshot: ZAP Channel DIDs]

It is quite simple to complete, needing only 3 bits of information:

  • Channel – The DAHDI channel you are assigning the DID to.
  • Description – Your description for this allocation.  I would suggest a name and a summary of the DID you will be allocating.
  • DID – The DID number callers dial to make this channel ‘ring’.

An example UK configuration for channel 3 might look like this, where the number 01234-123456 will be routed (via Inbound Routes) to the sales department:

[Screenshot: ZAP DID sample entry]
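The screenshot is not reproduced here, but for that example the three fields would be filled in roughly as follows (the description wording is only a suggestion):

Channel: 3
Description: Sales line (01234 123456)
DID: 01234123456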

Once completed, you can click ‘Submit Changes’.  You need to repeat this for each FXO port you have for inbound calls.

You can then save the changes back and configure the ‘Inbound Routes’ to actually ‘route’ the calls where you want them.

You can use almost any number as the DID, but I suggest you use the full number, including the STD code, in case you have any ‘out of area’ numbers.  It also generally reduces confusion in the future.

Migrating from hypervm (opensource xen) to citrix xenserver

*Note: this is a really old draft I just published, so some of the information is dated, but I figure it might be useful for some people.*

1st step, get VM ready for migration

ssh to the VM you’re migrating.

 yum install grub kernel-xen 
 cd /boot/grub
 wget http://files.viviotech.net/migrate/menu.lst

 ln -s /boot/grub/menu.lst /etc/grub.conf

 ln -s /boot/grub/menu.lst /boot/grub/grub.conf

 cd /etc
 wget http://files.viviotech.net/migrate/modprobe.conf

 mv /etc/fstab /etc/fstabold
 cd /etc
 wget http://files.viviotech.net/migrate/fstab

NOTE: If you want to boot up the hyperVM VM then all you have to do is put the old fstab back in and it’ll boot back up.
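In other words, rolling back is just a case of putting the saved copy back (run inside the VM):

 cp /etc/fstabold /etc/fstab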

2nd step is to convert the LVM to img

You can either use dd to clone the partition to a disk image, OR manually dd an empty image and use cp -rp. My suggestion is to use method 1 for all VMs that are 20 gig. If it is 40 gig, method 1 is suggested if usage is over 15-20 gig, otherwise method 2 is faster. Obviously for those with 80 gigs, method 2 is much faster unless the VM is using a lot of space. The problem is that in some cases (i.e. a heavily loaded platform) cp can take a substantial amount of time. Your mileage may vary.
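If you are not sure how much space a VM is actually using, a quick check inside it before shutting it down settles the question:

 df -h /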
Before you do anything, MAKE SURE THE VM IS SHUT DOWN.

Method 1

 dd if=/dev/VolGroup00/migrationtesting_rootimg of=/root/migrationtesting.img bs=4096 conv=noerror

Method 2

Create a 10 gig image file. Change this according to how much space the customer is using (i.e. a little more than what is being used).

 dd if=/dev/zero of=/root/migrationtesting.img bs=1M count=1 seek=10240
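The seek value is in 1 MB blocks, so seek=10240 gives a sparse file of roughly 10 gig; a 20 gig image, for example, would use seek=20480 instead:

 dd if=/dev/zero of=/root/migrationtesting.img bs=1M count=1 seek=20480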

#format it

 mkfs.ext3 migrationtesting.img

#Mount it

 mkdir /root/migratingtesting
 mount -o loop /root/migrationtesting.img /root/migratingtesting

#Copy the mounted files over from hyperVM and unmount afterward

 cp -rp /home/xen/migrationtesting.vm/mnt/* /root/migratingtesting
 umount /root/migratingtesting

3rd step (Convert the image file to an xva file)

NOTE: Use the NFS mount ONLY if you don’t have enough local space for the converted file. I learned that the conversion takes quite a long time, so I believe creating the xva file locally and then copying it to NFS is actually faster. This is probably for the same reason that exporting from XenServer takes 3 times as long as importing.
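A quick look at the free space on the local disk and on the NFS mount (paths as used in this example) makes the choice obvious:

 df -h /root /mnt/migrateNFS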
Grab Citrix’s Python file to do the conversion. I saved it locally here just in case.

 wget http://files.viviotech.net/migrate/xva.py

#run the file and dump the converted file to the nfs mount. -n is the name that appears when you are done importing. The converted file will not work unless --is-pv is there.

 python /root/xva.py -n migratingtesting --is-pv --disk /root/migrationtesting.img --filename=/mnt/migrateNFS/tim/migratingtesting.xva

4th step (Import, resize disk, add swap, tools)

#Import the VM into the platform.

 xe vm-import filename=/mnt/migrateNFS/tim/migratingtesting.xva

#If method 2 is used then you'll have to resize the HD. Just increase it back to the original size before booting it up and run

 resize2fs /dev/xvda

#Also add a new disk through XenCenter for the swap. It'll probably be xvdb. Run fdisk -l to make sure it's xvdb
 fdisk -l

#now create and enable swap. You shouldn't need to check /etc/fstab because I've already made xvdb the swap in there.

 mkswap /dev/xvdb
 swapon /dev/xvdb
 free -m
#Double check /etc/fstab and change accordingly IF the new drive isn't xvdb.
#I noticed that I wasn't able to mount the xentools. I've tarred up the xentools files for easy access (5.6 SP2)

 wget http://files.viviotech.net/migrate/Linux.tar
 tar -xvvf Linux.tar
 cd Linux
 ./install.sh

menu.lst

Just for reference, menu.lst. Obviously the latest kernel at this time is 2.6.18-238.19.1.el5xen; if that changes, this file will have to be edited with the new kernel version (a quick way to check is shown after the listing).

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You do not have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /, eg.
#          root (hd0,0)
#          kernel /boot/vmlinuz-version ro root=/dev/sda1
#          initrd /boot/initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-238.19.1.el5xen ro root=/dev/xvda
        initrd /boot/initrd-2.6.18-238.19.1.el5xen.img
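To double check which kernel-xen version actually got installed in step 1 (and therefore what the kernel and initrd lines should say), something like this inside the VM works:

 rpm -q kernel-xen
 ls /boot/vmlinuz-*xen* /boot/initrd-*xen*.img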

fstab

 /dev/xvda               /                       ext3    defaults,usrquota,grpquota        1 1
 devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
 tmpfs                   /dev/shm                tmpfs   defaults        0 0
 proc                    /proc                   proc    defaults        0 0
 sysfs                   /sys                    sysfs   defaults        0 0
 /dev/xvdb               swap                    swap    defaults        0 0

modprobe.conf

 alias eth0 xennet
 alias eth1 xennet
 alias scsi_hostadapter xenblk

http://darvil.com/?p=221

 


Solve a DRBD split-brain in 4 steps

Whenever a DRBD setup runs into a situation where the replication network is disconnected and fencing policy is set to dont-care (default), there is the potential risk of a split-brain. Even with resource level fencing or STONITH setup, there are corner cases that will end up in a split-brain.

When your DRBD resource is in a split-brain situation, don’t panic! Split-brain means that the contents of the backing devices of your DRBD resource on both sides of your cluster started to diverge. At some point in time, the DRBD resource on both nodes went into the Primary role while the cluster nodes themselves were disconnected from each other.

Different writes happened to both sides of your cluster afterwards. After reconnecting, DRBD doesn’t know which set of data is “right” and which is “wrong”.

Indications of a Split-Brain

The symptoms of a split-brain are that the peers will not reconnect on DRBD startup but stay in connection state StandAlone or WFConnection. The latter will be shown if the remote peer detected the split-brain earlier and was faster at shutting down its connection. In your kernel logs you will see messages like:

kernel: block drbd0: Split-Brain detected, dropping connection!
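You can check the current connection state on each node with either of these (resource is your resource name, as used in the steps below):

cat /proc/drbd
drbdadm cstate resource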

4 Steps to solve the Split-Brain

  1. Manually choose a node whose data modifications will be discarded. We call it the split brain victim. Choose wisely; all modifications will be lost! When in doubt, run a backup of the victim’s data before you continue.
  2. When running a Pacemaker cluster, you can enable maintenance mode. If the split brain victim is in Primary role, bring down all applications using this resource. Now switch the victim to Secondary role:
    victim# drbdadm secondary resource

    2.5  Disconnect the resource if it’s in connection state WFConnection:

    victim# drbdadm disconnect resource

  3. Force discard of all modifications on the split brain victim:
    victim# drbdadm -- --discard-my-data connect resource

    for DRBD 8.4.x:

    victim# drbdadm connect --discard-my-data resource
  4. Resync will start automatically if the survivor was in WFConnection network state. If the split brain survivor is still in StandAlone connection state, reconnect it:
    survivor# drbdadm connect resource

Now, at the latest, the resynchronization from the survivor (SyncSource) to the victim (SyncTarget) starts immediately. There is no full sync initiated, but all modifications on the victim will be overwritten by the survivor’s data, and modifications on the survivor will be applied to the victim.
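Put together, the whole sequence for a hypothetical resource named r0 (DRBD 8.4 syntax; adjust the name and run each command on the node indicated) is simply:

victim# drbdadm secondary r0
victim# drbdadm disconnect r0
victim# drbdadm connect --discard-my-data r0
survivor# drbdadm connect r0

The last command is only needed if the survivor is still in StandAlone state.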

Background: What happens?

With the default after-split-brain policy of disconnect, this will always happen in dual-primary setups. It can also happen in single-primary setups if one peer changes its role from Secondary to Primary at least once while disconnected from the previous (pre-interruption) Primary.

There are a variety of automatic policies to solve a split-brain, but some of them will overwrite (potentially valid) data without further inquiry. Even with these policies in place an unresolvable split-brain can occur.
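For reference, these automatic policies live in the net section of the resource configuration. A sketch for a hypothetical resource r0 using the DRBD 8.x option names (the values shown are just one conservative choice; check the drbd.conf man page for your version) might look like:

resource r0 {
  net {
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
}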

The split-brain is detected once the peers reconnect and do their DRBD protocol handshake which also includes exchanging of the Generation Identifiers (GIs).