Wednesday, 20 February 2008
CPU Timing in a C code
Examine this simple example:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

int main(void) {
    long int i, a;
    clock_t cputime, cputime1;
    double timing, timing0;

    cputime = clock();
    for (i = 0; i < 10000000; i++) {
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
        a = pow(4, 2);
    }
    cputime1 = clock();
    timing0 = (double) (cputime1 - cputime);                  /* elapsed, in clock ticks */
    timing  = (double) (cputime1 - cputime) / CLOCKS_PER_SEC; /* elapsed, in seconds */
    printf("timing =%12.9f clock per sec=%ld timing0=%12.9f\n",
           timing, (long) CLOCKS_PER_SEC, timing0);
    return 0;
}
Thursday, 14 February 2008
Latex Poster: A working full example
You must use LaTeX to generate your posters, because it looks beautiful. Here is a fully working, self-contained example [latex_poster].
Wednesday, 13 February 2008
Simple scripts: Compressing Files & Cluster Queue Jobs Termination
If you would like to compress your data files (or files of any type) recursively, you can use this simple Perl script [datcompress].
Sometimes it is annoying to delete all your PBS jobs one by one, so this simple Perl script helps [deleteall].
Tuesday, 12 February 2008
How to benchmark communication cost in the beowulf (parallel system)
Quite a long time ago, I proposed a simple way of estimating
the communication cost of an MPI (or multiple-instruction) program
on the Beowulf mailing list. Here is an outline of the procedure.
[original post]
Definition: the communication cost (Comm) is the ratio of the global
real time not accounted for by user or system CPU time across all
processors (processes, actually, if you use MPI) to the global real
time (GR):
Comm = GRU/GR = (GR - GU - GS)/GR
The symbols are defined as follows:
Comm = communication cost
GR = global real time
LR = local real (wall-clock) time
LU = local user CPU time
LS = local system CPU time
GS = global system CPU time
GU = global user CPU time
n = number of processes
GU = n*LU
GS = n*LS
GR = n*LR
In an actual MPI program, this measure could be implemented along
these lines:
BEGIN PROGRAM Myprogram
  begin distribution
    initial_timings = timings;
    ... do whatever your distributed computation is ...
    end_timings = timings;
  end distribution
  if (myrank == 0) { compute communication cost }
END Myprogram
BEGIN PROC timings
  for (n = 0; n < numproc; n++) {
    if (n == myrank) { measure timing of myrank and send to rank 0 }
  }
END PROC