Getting more precise timing control in Linux (C)


I am trying to create a low-jitter multicast source for digital TV. The program in question should buffer its input, calculate the intended send times from the PCR values in the stream, and send packets at relatively precise intervals. However, it is not running on an RTOS, so some timing variance is expected.

This is the basic code (the relevant variables are initialized in code omitted here):

    while (!sendstop) {
        // snip
        // put 7 MPEG packets into one UDP packet buffer "outpkt"
        // snip

        waittime = /* calculated from the PCR values, in microseconds */
        // waittime is on the order of 2000 -> 2 ms

        sleeptime = curtime;
        sleeptime.tv_nsec += waittime * 1000L;
        sleeptime.tv_sec += sleeptime.tv_nsec / 1000000000;
        sleeptime.tv_nsec %= 1000000000;

        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &sleeptime, NULL) == EINTR) {
            printf("i");
        }

        sendto(sck, outpkt, 1316, 0, res->ai_addr, res->ai_addrlen);  // send packet

        clock_gettime(CLOCK_MONOTONIC, &curtime);
    }

However, this results in the sending being slow (since the processing in between takes time), and the buffer fills up. So I thought I should take the difference between "sleeptime" (the time it should have been) and "curtime" (the actual time) and subtract it from the future "waittime". This works, but it is a bit too fast and empties the buffer.
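For illustration, the unscaled form of that correction, placed just above the inner "while" loop, would look roughly like this ("ostime" is assumed here to hold the previously intended wake-up time):

    difn = curtime.tv_nsec - ostime.tv_nsec;  /* how late the last wake-up was, in ns */
    if (difn < 0)
        difn += 1000000000;                   /* the wake-up wrapped past a second boundary */
    sleeptime.tv_nsec -= difn;                /* shorten the next sleep by that amount */
    if (sleeptime.tv_nsec < 0) {              /* renormalize the timespec */
        sleeptime.tv_nsec += 1000000000;
        sleeptime.tv_sec--;
    }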

My next idea was to scale the difference value before subtracting it (the following goes just above the "while..." loop):

    difn = curtime.tv_nsec - ostime.tv_nsec;
    if (difn < 0)
        difn += 1000000000;
    sleeptime.tv_nsec = sleeptime.tv_nsec - (difn * difnc) / 1000;  // difnc - adjustment factor
    if (sleeptime.tv_nsec < 0) {
        sleeptime.tv_nsec += 1000000000;
        sleeptime.tv_sec--;
    }
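(In this form, difnc == 1000 subtracts the full measured difference; values below 1000 under-compensate and values above it over-compensate.)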

However, different values of "difnc" work at different times of day, on different servers, and so on. There needs to be some kind of automatic adjustment based on the operation of the program. The best I could figure out was to increment/decrement the value every time the buffer got full or empty; however, that leads to slow cycles of "too fast" - "too slow". I also tried adjusting the "difnc" value based on how full/empty the buffer is, but that too leads to "slow"-"fast" cycles.
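For concreteness, a minimal sketch of that kind of buffer-level feedback (a plain proportional step; "buffill", "BUF_TARGET", and "GAIN_DIV" are illustrative names, not from the code above):

    /* proportional feedback on buffer fill level */
    long err = (long)buffill - BUF_TARGET;  /* >0: buffer too full, sending too slowly */
    difnc += err / GAIN_DIV;                /* nudge the correction factor */
    if (difnc < 0)
        difnc = 0;                          /* never correct in the wrong direction */

A single proportional term like this can still oscillate, which matches the "slow"/"fast" cycles described above.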

How can I automatically derive the "difnc" value, or is there some other method of getting more precise timing than the "clock_nanosleep" function, without busy-waiting (the server has other things to do)?
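For comparison, one commonly suggested variant, shown here only as a sketch, is to make the pacing purely deadline-driven: advance the absolute target by the nominal interval and never recompute it from the measured clock, so lateness in one cycle shortens the next sleep instead of accumulating:

    /* deadline-driven pacing: the target time is advanced by the
       nominal interval; the clock is never re-read inside the loop */
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);  /* initialize once */
    while (!sendstop) {
        /* ... fill outpkt, compute waittime (us) from the PCRs ... */
        deadline.tv_nsec += waittime * 1000L;
        deadline.tv_sec  += deadline.tv_nsec / 1000000000;
        deadline.tv_nsec %= 1000000000;
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               &deadline, NULL) == EINTR)
            ;                                   /* retry if interrupted */
        sendto(sck, outpkt, 1316, 0, res->ai_addr, res->ai_addrlen);
    }

Because the deadline is never recomputed from the current time, processing time before the sleep only shortens that sleep; the long-run send rate is set purely by the PCR-derived intervals.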

