There have been reports of the --delete flag not working when using rsync to synchronize two filesystems or locations. This can happen when rsync hits failures or errors while trying to sync zero-size files. A simple workaround is to add the --ignore-errors flag. Some examples for backing up a user directory:
rsync -vaz -e ssh --delete --ignore-errors user@host:/home/user/ local_dir/
rsync -vaz --delete --ignore-errors /home/user/ /media/disk/backup_dir/
Wednesday, 15 October 2008
Thursday, 14 August 2008
TCL Bad Code Error
Using the Tcl extension mechanism is quite a reasonable way to write flexible code. However, if you implement a command procedure and return TCL_OK, make sure your function's return type is int; otherwise Tcl complains with a strange run-time error saying "bad code". So, be sure about the correct return type.
Monday, 21 July 2008
High CPU usage on routers due to traffic utilization.
Quite a while ago, I was trying to address traffic-utilization issues on border routers. This is the summary of my communication with the nsp mailing list [Traffic Utilization Summary].
Thursday, 6 March 2008
Simple synchronization via ssh with rsync.
One can synchronize two different locations by using rsync, with or without ssh.
Here are the steps:
1. get from the other location
rsync -avuzb dir/mydir/ .
2. put into other location
rsync -Cavuzb . dir/mydir/
If one of the locations is reachable via ssh, then one can write
rsync -avz -e ssh remoteuser@remotehost:/remote/dir /this/dir/
Look at the man pages for more details. Actually, the -e flag is not strictly needed, since recent rsync versions use ssh as the default remote shell.
Wednesday, 20 February 2008
CPU Timing in a C code
Examine this simple example:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

int main() {
    long int i, a;
    clock_t cputime, cputime1;
    double timing, timing0;

    cputime = clock();              /* CPU time at start, in clock ticks */
    for (i = 0; i < 10000000; i++) {
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
    }
    cputime1 = clock();             /* CPU time at end */

    timing0 = (double) (cputime1 - cputime);                  /* elapsed ticks */
    timing  = (double) (cputime1 - cputime) / CLOCKS_PER_SEC; /* elapsed seconds */
    printf("timing =%12.9f clock per sec=%ld timing0=%12.9f\n",
           timing, (long) CLOCKS_PER_SEC, timing0);
    exit(0);
}
Thursday, 14 February 2008
Latex Poster: A working full example
You should use LaTeX to generate your posters, because the result looks beautiful. Here is a fully working and self-contained example [latex_poster].
Wednesday, 13 February 2008
Simple scripts: Compressing Files & Cluster Queue Jobs Termination
If you would like to compress your data files (or files of any type) recursively, you can use this simple Perl script [datcompress].
Sometimes it is annoying to delete all your PBS jobs one by one, so this simple Perl script helps [deleteall].
Tuesday, 12 February 2008
How to benchmark communication cost in the beowulf (parallel system)
Quite a long time ago, I proposed a simple way of estimating the
communication cost of an MPI (or multiple-instruction) program,
which appeared on the beowulf mailing list. Here is an outline of
the procedure.
[original post]
Definition: Communication cost (Comm) is the ratio between the global
real time not accounted for by user or system time (GRU, i.e. the time
the processes, if you use MPI, spend communicating or waiting) and the
global real time (GR).
Comm=GRU/GR = (GR-GU-GS)/GR
Undefined symbols are as follows:
Comm = communication cost
GR = global real time
LR = local real (wall-clock) time
LU = local user time
LS = local system time
GS = global system time
GU = global user time
n = # of procs
GU = n*LU
GS = n*LS
GR = n*LR
(the last three assume all n processes spend roughly the same local times)
In an actual MPI program, the implementation of this measure could be
outlined as follows:
BEGIN PROGRAM Myprogram
begin distribution
initial_timings=timings;
...do whatever is your distributed computing....
end_timings=timings;
end distribution
if(myrank == 0) {compute communication cost}
END Myprogram
BEGIN PROC timings
for(n=0; n < numproc; n++) {
if(n==myrank) { measure timing of myrank and send to rank 0 }
}
END PROC