Are Vivado results repeatable for identical tool inputs?
For the most part the answer is yes: Vivado should generate identical results between runs whose tool inputs are identical.
This applies to all parts of the design flow, from HDL synthesis through to bitstream generation.
Results should be independent of the platform used and of multithreading settings.
The known exception is cross-platform repeatability involving 32-bit Windows: results obtained on 32-bit Windows are not guaranteed to match those from other platforms.
More generally, given the multitude of platforms and variants that support Vivado, it is not always possible to guarantee that results are repeatable, even when all tool inputs are identical.
If you encounter non-repeatability under the conditions described above, please open a WebCase.
Once the WebCase is open, the factory will begin its investigation and attempt to fix the problem.
Providing a testcase that demonstrates the problem allows the factory to reproduce the non-repeatability and pinpoint the root cause, which makes a successful fix likely.
If you cannot provide a testcase, it may still be possible to find the problem by inspecting the Vivado log files of the differing runs, but a successful fix is less likely.
Identifying repeatability issues
The most common symptoms of divergent runs are different post-route timing results.
If you suspect a case of non-repeatability and have verified that tool inputs are identical, you can further verify divergence using checksums.
Vivado reports a checksum in the log at each intermediate stage of each implementation command; this checksum is a signature computed from the design netlist and physical data.
Checksums can be compared between runs, and checksum mismatches help identify where results diverge.
For example, if the checksums of two runs begin to diverge at some stage and thereafter never converge, and all tool inputs are identical, this is likely a repeatability issue.
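As a sketch, the checksum lines of two runs can be extracted from their logs and compared with standard tools. The file names and stand-in log contents below are illustrative, and the exact format of the checksum lines varies by Vivado version, so adjust the grep pattern to match your logs:

```shell
# Stand-in logs for illustration; in practice, point the grep
# commands below at the vivado.log of each run being compared.
printf 'Phase 1\nChecksum: 8e44\nPhase 2\nChecksum: 17ab\n' > run1.log
printf 'Phase 1\nChecksum: 8e44\nPhase 2\nChecksum: 93c0\n' > run2.log

# Keep only the checksum lines from each log.
grep 'Checksum' run1.log > run1_checksums.txt
grep 'Checksum' run2.log > run2_checksums.txt

# The first differing line marks the stage where results diverge.
if diff run1_checksums.txt run2_checksums.txt > checksum_diff.txt; then
    echo "Runs match"
else
    echo "Runs diverge"
fi
```

Here the runs match through the first stage and diverge at the second, so the divergence can be attributed to whatever the tool did between those two checksum reports.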
Checkpoints vs. in-memory runs
Checkpoints should produce repeatable results compared to the equivalent in-memory design flow.
Consider two command sequences: one that runs the entire design flow in memory, and one that re-enters the flow at phys_opt_design using the placed checkpoint written by the in-memory flow.
These two runs should give identical results. Although checksums may differ immediately after the open_checkpoint command due to netlist sorting differences, they should converge after the first implementation command, which is phys_opt_design:
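As a sketch, the two command sequences might look like the following (checkpoint file names are illustrative):

```tcl
# Flow A: entire implementation run in memory.
open_checkpoint post_synth.dcp
opt_design
place_design
write_checkpoint -force placed.dcp   ;# save the placed state
phys_opt_design
route_design

# Flow B: re-enter the flow at phys_opt_design from the placed
# checkpoint written by Flow A. Checksums may differ right after
# open_checkpoint, but should converge once phys_opt_design runs.
open_checkpoint placed.dcp
phys_opt_design
route_design
```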
Each implementation command automatically sorts the netlist before running to ensure netlist consistency, so that results do not diverge due to netlist differences.
If repeatability is absolutely critical to your design environment, the following practices can help maximize it:
Running in single-threaded mode.
When different numbers of CPUs are used between runs, simultaneous threads may execute operations in different orders and cause diverging results, even on the same machine and OS.
To run in single-threaded mode use the following:
set_param general.maxThreads 1
This disables multithreading.
Running on the same machine or running on machines using the same OS.
This reduces the chances of encountering non-repeatability due to differences in how core functions and system calls are implemented from one operating system to another, especially between Windows and Linux.
The chances are further reduced by fine-tuning the environment, in particular by eliminating differences in computing hardware between runs.
Note: Repeatability is not guaranteed between 32-bit Windows and other operating systems, including 64-bit Windows.
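Tying the points above together, a repeatability-oriented run script might begin as follows (a sketch; the get_param call is included only to confirm the setting before the flow starts):

```tcl
# Force single-threaded execution so that operation order is
# deterministic between runs (disables multithreading).
set_param general.maxThreads 1

# Confirm the setting took effect before launching the flow.
puts "maxThreads = [get_param general.maxThreads]"
```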