- 1 Administration
- 1.1 How can I get Spider to restart automatically if it crashes?
- 1.2 How can I monitor traffic to and from a node or user?
- 1.3 I see spots coming in my debug log, but none go out to the users
- 1.4 My neighbouring node cannot use the RCMD command to me, he just keeps getting the "tut tut" message
- 1.5 I do not seem to be sending any bulletin mail to my link partners, what is wrong?
- 1.6 How can I automatically limit the amount of debug logfiles that are stored?
- 1.7 I updated my Linux distribution and now Spider cannot read the users file or the dupefile, what is the problem?
- 1.8 Since I last updated I seem to be getting duplicate spots appearing
- 1.9 I have deleted a message but it is still there, why?
- 1.10 I have updated from CVS and I get all sorts of errors when I restart
- 1.11 I have done a CVS update, restarted and it says that "fileX" is missing
How can I get Spider to restart automatically if it crashes?
Put this line into /etc/inittab ..
DX:3:respawn:/bin/su -c "/usr/bin/perl -w /spider/perl/cluster.pl" sysop > /dev/tty7
Then run telinit q as root; be aware that Spider will start (or restart) immediately. From then on, cluster.pl will start on tty7 every time you boot, and if it crashes it should be respawned automatically.
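On modern distributions that use systemd instead of /etc/inittab, a service unit gives the same respawn behaviour. The following is only a sketch: the unit name is made up, and the user and paths are taken from the inittab line above, so adjust them to your installation.

```ini
# Sketch of /etc/systemd/system/dxspider.service (hypothetical unit name).
# Enable with: systemctl daemon-reload && systemctl enable --now dxspider
[Unit]
Description=DXSpider DX cluster node
After=network.target

[Service]
User=sysop
ExecStart=/usr/bin/perl -w /spider/perl/cluster.pl
Restart=on-failure

[Install]
WantedBy=multi-user.target
```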
How can I monitor traffic to and from a node or user?
There are 2 ways to achieve this. You can use the tail command like this ..
tail -f /spider/data/debug/167.dat |grep G0VGS
or, in later versions of Spider, there is a command called watchdbg, in which case you simply type ..
watchdbg G0VGS
I see spots coming in my debug log, but none go out to the users
Please check the time on your PC.
All spots are checked to be no more than 15 minutes in the future and no more than 60 minutes in the past. If the clock shown on your client prompt (or in the console.pl display) is not set to the correct time in GMT (UTC) and is more than one hour out (say, because it is on local (summer) time), then the test will fail: no spots will go out to users, nor will they be stored.
If you are connected to the internet, most linux distributions have an implementation of ntpd. The Microsoft Windows 2003, XP, 2000 and NT machine clock can also be set to be synchronized to an NTP source. This can be done in the standard time configuration screen. There is also the simple nettime program for Windows 95/98/ME.
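A quick way to see the clock as the spot check sees it is to print it in UTC. The NTP pool hostname below is only an example, and ntpdate may need installing separately:

```shell
# Spots are rejected if more than 15 minutes in the future or 60 minutes
# in the past, so check the system clock in UTC:
date -u
# If it looks wrong, compare against an NTP server without setting the
# clock (pool.ntp.org is an example hostname; requires ntpdate):
#   ntpdate -q pool.ntp.org
```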
My neighbouring node cannot use the RCMD command to me, he just keeps getting the "tut tut" message
Assuming that the permissions are set correctly (perm level 5 required), it could be that the home_node is set incorrectly. You can reset the home_node using the spoof command like this ..
spoof gb7adx set/home gb7adx
Assuming that the node_call you are changing is gb7adx.
I do not seem to be sending any bulletin mail to my link partners, what is wrong?
There is a file in /spider/msg called forward.pl.issue. Rename this to forward.pl and edit it to meet your requirements. You will need to issue the command load/forward or restart Spider for the changes to take effect.
How can I automatically limit the amount of debug logfiles that are stored?
Use the tmpwatch command. Create a file in /etc/cron.daily/ containing the line ...
/usr/sbin/tmpwatch -f 240 /spider/data/debug
Remember to make the file executable! This will limit your debug data to the last 10 days (240 hours).
However, modern versions of DXSpider will do this for you, so this is now probably unnecessary.
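If tmpwatch is not available on your distribution, a find(1) script in /etc/cron.daily/ does the equivalent job. This is a sketch only; the 10-day window matches the tmpwatch example above, and the directory is Spider's standard debug location:

```shell
#!/bin/sh
# Delete debug files older than 10 days (equivalent to tmpwatch -f 240).
# DEBUG_DIR defaults to Spider's standard debug directory.
DEBUG_DIR="${DEBUG_DIR:-/spider/data/debug}"
if [ -d "$DEBUG_DIR" ]; then
    find "$DEBUG_DIR" -type f -mtime +10 -delete
fi
```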
I updated my Linux distribution and now Spider cannot read the users file or the dupefile, what is the problem?
Almost certainly this is caused by a change in the DB file format used by your distribution's Perl. Follow these few steps to correct the problem.
- stop the cluster (disable any autostart in inittab)
- cd /spider/data
- issue the command: perl user_asc
- restart the cluster
That should solve the problem.
Since I last updated I seem to be getting duplicate spots appearing
What has probably happened is that the dupefile has got corrupted in some way. Simply delete the /spider/data/dupefile and restart the cluster. It may take a little time to become fully functional but should solve your problem.
I have deleted a message but it is still there, why?
This is now the way messages are handled for deletion in Spider. If you look closely you will see a 'D' following the message number. This message is marked for deletion and will be deleted in 2 days if nothing further is done. Optionally you can use the command delete/expunge to delete it immediately.
I have updated from CVS and I get all sorts of errors when I restart
Whenever you update from CVS, a log is displayed. Next to each file that is downloaded there is a letter, e.g.:
? fred.pl
? jim
.. . ..
cvs server: Updating perl
P cluster.pl
C Messages
M Internet.pm
U DXProt.pm
.. . ..
For normal CVS use you should only ever see the letters 'P', 'U' or '?'. The letter 'P' means that the file has changed in CVS and is more recent than the one that is currently on your system. You will also see the letter '?', which means that there is a file that you (or the system) has created that CVS doesn't know about and isn't under its control. These are all normal and good.
Sometimes you will see the letter 'U' next to a file. This means that it is a new file that you don't currently have. This is also OK.
However, if you see the letter 'C' or 'M', CVS thinks that the file has been changed locally. A 'C' means the local change conflicts with one or more of the modifications that CVS wants to download to your system. An 'M' means CVS thinks it can merge the local change with the incoming one itself (you may also see some messages about "merging revision 1.xx with 1.yy"). Neither of these things is good. Files that are under the control of CVS must not be changed by sysops. It is the files with a 'C' next to them that will show the errors you are complaining about, and they will be things like:-
Syntax error near '<<<<' at line 23
Syntax error near '===' at line 40
Syntax error near '>>>' at line 51
You will not necessarily see all of the errors shown but you will get one or more sets of some of them. The cure is simple:
- identify the file that is causing the problem.
- remove the file.
- run the cvs update again.
You will see that file come back (with a letter 'U' next to it). That will be the correct file as CVS thinks it should be. If you still have a problem, then get onto the dxspider-support mailing list.
If all else fails (or you have several conflicts) you can safely remove the entire /spider/perl and /spider/cmd directories and then run the cvs update. They will all be recreated in their pristine condition.
I have done a CVS update, restarted and it says that "fileX" is missing
The correct way to run cvs is:-
cd /spider
cvs -z3 update -d
The '-d' is crucial. It makes sure that any new directories, which may contain new code, are created and that the new code in them is downloaded. I have absolutely no idea why this is not the default, seeing as CVS (in other circumstances) happily recurses its merry way down a directory tree, but there you are.
WinCVS and other graphical CVS frontends have a checkbox for the update screen called something like "create sub-directories" (it may be hidden in some sub-screen - go look for it if it isn't obvious). Make sure that this box is checked. If you can make this the default setting in the program's setup screen then please do that. It will save you a lot of pulled hair.