Human-readable RTL code generation

Every now and then I meet somebody (most often a compiler writer, but not exclusively) who believes that generating human-readable RTL code is worthless. They claim that nobody should ever need to look at generated code, among other reasons because people should simply trust the compiler, the way software engineers do. It is time to examine the facts.

  1. The majority of the hardware engineers we've met want to be able to understand the generated RTL, because they want to be able to reuse, verify, optimize or modify the generated code themselves. This includes people from big companies such as STMicroelectronics, Samsung, Renesas, ARM, as well as medium-sized enterprises like Thomson Video Networks, RivieraWaves, ScaleoChip. Although this is only a subset of all hardware designers and semiconductor companies, I think we can consider it a reasonably representative sample. The remaining designers may not care whether the generated code is human-readable, but no designer has ever told us that he or she would prefer incomprehensible code, or worse, a netlist. That is probably because...

  2. RTL is a de facto standard for IP cores. Like it or not, unless you are buying IP cores for FPGAs, in which case you can (but don't have to) get a netlist, virtually every IP is delivered as RTL. In other words, forget it if your tool generates a netlist. Now RTL might be low-level like assembly is low-level, but the comparison goes no further. Target-specific assembly languages are untyped and have a well-defined syntax and semantics, whereas the languages in which cores are written (VHDL and Verilog) can be used in wildly different ways. So much so that EDA tool vendors had to define coding styles that you must respect if you want the tool to make something useful out of your code. If your tool generates spaghetti code that is too low-level and does not respect a good coding style, linters (which check that code will synthesize well) and synthesizers alike will print a ton of warnings. Is that really what you want your customers to see when they use your IP?
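     As an illustration (a made-up sketch, not output from any particular generator), this is the kind of synthesis-friendly Verilog a generator should aim for: a clearly named module, one standard clocked process, and a reset written the way every synthesizer's coding-style checks expect. The module and signal names here are hypothetical.

```verilog
// Hypothetical generated counter, written the way a designer would write it:
// descriptive names, a single clocked always block, synchronous reset.
// Both linters and synthesizers accept this style without complaint.
module pixel_counter #(
  parameter WIDTH = 10
) (
  input  wire             clk,
  input  wire             reset,
  input  wire             enable,
  output reg  [WIDTH-1:0] count
);

  always @(posedge clk) begin
    if (reset)
      count <= {WIDTH{1'b0}};   // synchronous reset, cleanly inferred
    else if (enable)
      count <= count + 1'b1;    // simple increment, maps to an adder
  end

endmodule
```

     Contrast this with generated code full of auto-numbered nets, redundant intermediate signals and combinational loops: the behavior may be the same, but the warnings (and the customer's confidence) will not be.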

  3. RTL is not assembly. First of all, hardware and software are different. When you write software, the source code is where things happen, and it is what you spend most of your time reading and writing. From time to time you may need to look at the assembly, or write some for optimization purposes, and that's it. In hardware, either (1) you write RTL yourself (so it is the source code), or (2) you generate it from a higher-level language like Cx or with HLS. Assembly is the last step on the path from source code to binary instructions, whereas RTL sits at the beginning of the path from source code to transistors, which includes synthesis, P&R, and other steps further down the design flow. Virtually no software tool will reference the assembly, but in hardware any information that back-end tools give you references the RTL. Good luck if tools identify a problem in the generated RTL and it is not human-readable! :mrgreen:

  4. Synthesizing RTL to efficient hardware is hard. So much so that synthesizers were once the leading products of the large EDA companies. Even now, an EDA company called Oasys Design Systems has come up with a new "chip synthesis" approach that optimizes at the RTL rather than at the gate level, and combines synthesis and P&R to reduce the whole process from days to hours. Translating assembly to binary is absolutely straightforward compared to that :-) The fact that companies are still innovating in RTL synthesis should be a strong indication that it makes much more sense to generate synthesis-friendly RTL than to do the synthesis yourself. After all, as Oasys says in their whitepaper:

ESL tools all have an extremely coarse view of implementation trade-offs.

So there you have it, the main reasons why we generate human-readable, target-agnostic RTL 8-)