Why is there a difference between the C Model simulation and the core output for maximum input values to the decoder?
When the soft input values for the tail bits are at the maximum representable magnitude, the decoded bit output of the C model and the core can differ. In this case, the C model output is correct. For all other input values, the output of the core is correct. This minor issue will be fixed in a future release of the core.
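If matching behavior between the C model and the core is needed before the fix is available, one possible workaround is to saturate the tail-bit soft inputs so they never take the extreme representable values. The sketch below is illustrative only, not a documented part of the core or its C model: it assumes 8-bit two's-complement soft values, and the names `clamp_llr`, `clamp_tail_llrs`, and `NUM_TAIL_BITS` are hypothetical.

```c
#include <stdint.h>

/* LTE turbo codes terminate each constituent encoder with 3 tail
 * bits, giving 12 transmitted tail soft values per code block. */
#define NUM_TAIL_BITS 12

/* Pull one signed 8-bit soft value back from the extreme codes so
 * tail inputs never carry the maximum magnitude that triggers the
 * C model / core mismatch described above. */
static int8_t clamp_llr(int8_t llr)
{
    if (llr == INT8_MAX)      /* +127 -> +126 */
        return INT8_MAX - 1;
    if (llr == INT8_MIN)      /* -128 -> -127 */
        return INT8_MIN + 1;
    return llr;
}

/* Apply the clamp to the tail portion of the soft input buffer;
 * 'tail' points at the first tail soft value. */
void clamp_tail_llrs(int8_t *tail)
{
    for (int i = 0; i < NUM_TAIL_BITS; i++)
        tail[i] = clamp_llr(tail[i]);
}
```

Since the mismatch occurs only at the maximum input values, reducing the tail-bit magnitude by one quantization step should have a negligible effect on decoding performance while keeping the two outputs consistent.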
Please see (Xilinx Answer 30630) for a detailed list of LogiCORE 3GPP LTE Turbo Decoder Release Notes and Known Issues.