Under very heavy CPU and ACP interconnect traffic, memory accesses arriving through the OCM interconnect can be starved, because the CPU read and write requests have higher arbitration priority than requests coming through the OCM interconnect.
The memory traffic generated by the CPUs can be reduced by enabling the MMU and caches and marking the OCM address range as Cacheable or Strongly Ordered.
Minor. The issue occurs under very extreme and artificially created usage scenarios. With the recommended work-around, the starvation scenario has never been re-created.
When the MMU is enabled, configure the OCM memory regions as Cacheable or Strongly Ordered memory types. Refer to the Work-around Details below.
Systems where the processor accesses the OCM.
Device Revision(s) Affected: Refer to (Xilinx Answer 47916) - Zynq-7000 Design Advisory Master Answer Record
Certain MMU configurations are more likely to create these starvation scenarios, since they can produce a higher data throughput requirement on the CPU ports to the OCM.
When the MMU is enabled, it is recommended to configure the OCM memory regions as Cacheable or Strongly Ordered memory types. With these configurations, no periods of starvation have been observed in any of the simulations.
Other MMU settings can lead to periods where the OCM Switch port experiences starvation.