I am measuring performance as the execution time of a C function on the MicroBlaze core. The execution time is measured on the target system using a 1 kHz interrupt (generated by an opb_timer peripheral) that continuously increments a counter variable. A time stamp is taken from this counter before and after executing the C function 1000 times, and the execution time is derived from the difference between the two. I run this design in both EDK 6.3i and EDK 7.1i, and the performance in 7.1i is far worse than what I got in 6.3i. My design also uses floating-point functions.
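For illustration, a minimal sketch of this measurement scheme might look like the code below. The names tick_count, timer_isr, and function_under_test are placeholders rather than actual EDK API names, and the EDK-specific steps of configuring the opb_timer and registering the handler with the interrupt controller are omitted.

    #include <stdio.h>

    /* Incremented once per 1 kHz timer interrupt, so each tick is 1 ms.
       Declared volatile because it is modified from an ISR. */
    volatile unsigned int tick_count = 0;

    /* Handler attached to the opb_timer interrupt (registration with
       the interrupt controller is omitted in this sketch). */
    void timer_isr(void *unused)
    {
        tick_count++;
    }

    /* Placeholder for the C function whose execution time is measured. */
    extern float function_under_test(float x);

    int main(void)
    {
        unsigned int start, end, i;
        volatile float result = 0.0f;   /* volatile so the loop is not optimized away */

        start = tick_count;             /* time stamp before */
        for (i = 0; i < 1000; i++)
            result += function_under_test(1.5f);
        end = tick_count;               /* time stamp after */

        /* With a 1 ms tick, (end - start) is the total time in ms for
           1000 calls, which equals the average time per call in us. */
        printf("1000 calls took %u ms (avg %u us/call)\n",
               end - start, end - start);
        return 0;
    }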
Why am I seeing this performance change?
EDK 7.1i uses heavier but more precise GCC floating-point libraries, in contrast to the lighter but less precise ones in EDK 6.3i. As a consequence, the higher-quality results in 7.1i come at the expense of execution speed. This has been significantly improved in EDK 8.1i.