Basic Principles — Yosys documentation (2024)

This chapter contains a short introduction to the basic principles of digital circuit synthesis.

Levels of Abstraction

Digital circuits can be represented at different levels of abstraction. During the design process a circuit is usually first specified using a higher level abstraction. Implementation can then be understood as finding a functionally equivalent representation at a lower abstraction level. When this is done automatically using software, the term synthesis is used.

So synthesis is the automatic conversion of a high-level representation of a circuit to a functionally equivalent low-level representation of a circuit. Figure [fig:Basics_abstractions] lists the different levels of abstraction and how they relate to different kinds of synthesis.

Regardless of the way a lower level representation of a circuit is obtained (synthesis or manual design), the lower level representation is usually verified by comparing simulation results of the lower level and the higher level representation [1]. Therefore even if no synthesis is used, there must still be a simulatable representation of the circuit at all levels to allow for verification of the design.

Note: The exact meaning of terminology such as “High-Level” is of course not fixed over time. For example the HDL “ABEL” was first introduced in 1985 as “A High-Level Design Language for Programmable Logic Devices”, but would not be considered a “High-Level Language” today.

System Level

The System Level abstraction of a system only looks at its biggest building blocks like CPUs and computing cores. At this level the circuit is usually described using traditional programming languages like C/C++ or Matlab. Sometimes special software libraries are used that are aimed at simulating circuits on the system level, such as SystemC.

Usually no synthesis tools are used to automatically transform a system level representation of a circuit to a lower-level representation. But system level design tools exist that can be used to connect system level building blocks.

The IEEE 1685-2009 standard defines the IP-XACT file format that can be used to represent designs on the system level and building blocks that can be used in such system level designs.

High Level

The high-level abstraction of a system (sometimes referred to as algorithmic level) is also often represented using traditional programming languages, but with a reduced feature set. For example when representing a design at the high level abstraction in C, pointers can only be used to mimic concepts that can be found in hardware, such as memory interfaces. Full featured dynamic memory management is not allowed as it has no corresponding concept in digital circuits.

Tools exist to synthesize high level code (usually in the form of C/C++/SystemC code with additional metadata) to behavioural HDL code (usually in the form of Verilog or VHDL code). Aside from the many commercial tools for high level synthesis there are also a number of FOSS tools for high level synthesis.

Behavioural Level

At the behavioural abstraction level a language aimed at hardware description such as Verilog or VHDL is used to describe the circuit, but so-called behavioural modelling is used in at least part of the circuit description. In behavioural modelling there must be a language feature that allows for imperative programming to be used to describe data paths and registers. This is the always-block in Verilog and the process-block in VHDL.

In behavioural modelling, code fragments are provided together with a sensitivity list; a list of signals and conditions. In simulation, the code fragment is executed whenever a signal in the sensitivity list changes its value or a condition in the sensitivity list is triggered. A synthesis tool must be able to transfer this representation into an appropriate datapath followed by the appropriate types of register.

For example consider the following Verilog code fragment:

always @(posedge clk) y <= a + b;

In simulation the statement y <= a + b is executed whenever a positive edge on the signal clk is detected. The synthesis result however will contain an adder that calculates the sum a + b all the time, followed by a d-type flip-flop with the adder output on its D-input and the signal y on its Q-output.

Usually the imperative code fragments used in behavioural modelling can contain statements for conditional execution (if- and case-statements in Verilog) as well as loops, as long as those loops can be completely unrolled.

Interestingly there seems to be no other FOSS tool that is capable of performing Verilog or VHDL behavioural synthesis besides Yosys (see App. [chapter:sota]).

Register-Transfer Level (RTL)

On the Register-Transfer Level the design is represented by combinatorial data paths and registers (usually d-type flip-flops). The following Verilog code fragment is equivalent to the previous Verilog example, but is in RTL representation:

assign tmp = a + b;    // combinatorial data path
always @(posedge clk)  // register
    y <= tmp;

A design in RTL representation is usually stored using HDLs like Verilog and VHDL. But only a very limited subset of features is used, namely minimalistic always-blocks (Verilog) or process-blocks (VHDL) that model the register type used and unconditional assignments for the datapath logic. The use of HDLs on this level simplifies simulation as no additional tools are required to simulate a design in RTL representation.

Many optimizations and analyses can be performed best at the RTL level. Examples include FSM detection and optimization, identification of memories or other larger building blocks and identification of shareable resources.

Note that RTL is the first abstraction level in which the circuit is represented as a graph of circuit elements (registers and combinatorial cells) and signals. Such a graph, when encoded as a list of cells and connections, is called a netlist.

RTL synthesis is easy as each circuit node element in the netlist can simply be replaced with an equivalent gate-level circuit. However, usually the term RTL synthesis does not only refer to synthesizing an RTL netlist to a gate level netlist but also to performing a number of highly sophisticated optimizations within the RTL representation, such as the examples listed above.

A number of FOSS tools exist that can perform isolated tasks within the domain of RTL synthesis steps. But there seems to be no FOSS tool that covers a wide range of RTL synthesis operations.

Logical Gate Level

At the logical gate level the design is represented by a netlist that uses only cells from a small number of single-bit cell types, such as basic logic gates (AND, OR, NOT, XOR, etc.) and registers (usually D-type flip-flops).

A number of netlist formats exist that can be used on this level, e.g. the Electronic Design Interchange Format (EDIF), but for ease of simulation often an HDL netlist is used. The latter is an HDL file (Verilog or VHDL) that only uses the most basic language constructs for instantiation and connecting of cells.

There are two challenges in logic synthesis: First finding opportunities for optimizations within the gate level netlist and second the optimal (or at least good) mapping of the logic gate netlist to an equivalent netlist of physically available gate types.

The simplest approach to logic synthesis is two-level logic synthesis, where a logic function is converted into a sum-of-products representation, e.g. using a Karnaugh map. This is a simple approach, but has exponential worst-case effort and cannot make efficient use of physical gates other than AND/NAND-, OR/NOR- and NOT-gates.
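The two-level approach can be sketched in a few lines: enumerate the full truth table and emit one product term per input combination where the function is 1, giving the canonical (unoptimized) sum-of-products. The function names below are illustrative and not taken from any real tool; note that the enumeration itself is exponential in the number of inputs, which is exactly the worst-case effort mentioned above.

```python
# Sketch: naive two-level (sum-of-products) extraction from a truth table.
from itertools import product

def to_sum_of_products(f, n):
    """Return f as a list of minterms.

    Each minterm is a tuple of (input_index, value) pairs; value 1 means
    the input appears non-inverted in the AND term, 0 means inverted.
    """
    minterms = []
    for bits in product([0, 1], repeat=n):   # 2**n combinations
        if f(*bits):
            # One AND term per input combination where f evaluates to 1.
            minterms.append(tuple((i, b) for i, b in enumerate(bits)))
    return minterms

# Example: 2-input XOR has exactly the two minterms a'b and ab'.
xor_sop = to_sum_of_products(lambda a, b: a ^ b, 2)
```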

Therefore modern logic synthesis tools utilize much more complicated multi-level logic synthesis algorithms. Most of these algorithms convert the logic function to a Binary-Decision-Diagram (BDD) or And-Inverter-Graph (AIG) and work from that representation. The former has the advantage that it has a unique normalized form. The latter has much better worst case performance and is therefore better suited for the synthesis of large logic functions.
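As a rough illustration of the AIG idea, the sketch below stores two-input AND nodes with structural hashing, so structurally identical subgraphs are shared rather than duplicated. The class and the literal encoding (node id plus an invert bit) are simplified assumptions for illustration, not the representation used by any particular tool.

```python
# Sketch: a minimal And-Inverter-Graph with structural hashing.
class AIG:
    def __init__(self):
        self.nodes = {}    # (a, b) -> node id, with a <= b (canonical order)
        self.next_id = 1   # id 0 reserved for constant 0

    def lit(self, node, inverted=False):
        # An edge literal: node id shifted up, lowest bit is the invert flag.
        return 2 * node + int(inverted)

    def and_gate(self, a, b):
        a, b = min(a, b), max(a, b)       # commutativity: canonical order
        if (a, b) not in self.nodes:      # structural hashing: reuse nodes
            self.nodes[(a, b)] = self.next_id
            self.next_id += 1
        return self.nodes[(a, b)]

g = AIG()
x = g.lit(g.and_gate(2, 4))
y = g.lit(g.and_gate(4, 2))   # identical structure is detected and shared
```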

Good FOSS tools exist for multi-level logic synthesis.

Yosys contains basic logic synthesis functionality but can also use ABC for the logic synthesis step. Using ABC is recommended.

Physical Gate Level

On the physical gate level only gates are used that are physically available on the target architecture. In some cases this may only be NAND, NOR and NOT gates as well as D-type registers. In other cases this might include cells that are more complex than the cells used at the logical gate level (e.g. complete half-adders). In the case of an FPGA-based design the physical gate level representation is a netlist of LUTs with optional output registers, as these are the basic building blocks of FPGA logic cells.

For the synthesis tool chain this abstraction is usually the lowest level. In case of an ASIC-based design the cell library might contain further information on how the physical cells map to individual switches (transistors).

Switch Level

A switch level representation of a circuit is a netlist utilizing single transistors as cells. Switch level modelling is possible in Verilog and VHDL, but is seldom used in modern designs, as in modern digital ASIC or FPGA flows the physical gates are considered the atomic building blocks of the logic circuit.

Yosys

Yosys is a Verilog HDL synthesis tool. This means that it takes a behavioural design description as input and generates an RTL, logical gate or physical gate level description of the design as output. Yosys’ main strengths are behavioural and RTL synthesis. A wide range of commands (synthesis passes) exist within Yosys that can be used to perform a wide range of synthesis tasks within the domain of behavioural, RTL and logic synthesis. Yosys is designed to be extensible and therefore is a good basis for implementing custom synthesis tools for specialised tasks.

Features of Synthesizable Verilog

The subset of Verilog that is synthesizable is specified in a separate IEEE standards document, the IEEE standard 1364.1-2002. This standard also describes how certain language constructs are to be interpreted in the scope of synthesis.

This section provides a quick overview of the most important features of synthesizable Verilog, structured in order of increasing complexity.

Structural Verilog

Structural Verilog (also known as Verilog netlists) is a netlist in Verilog syntax. Only the following language constructs are used in this case:

  • Constant values

  • Wire and port declarations

  • Static assignments of signals to other signals

  • Cell instantiations

Many tools (especially at the back end of the synthesis chain) only support structural Verilog as input. ABC is an example of such a tool. Unfortunately there is no standard specifying what structural Verilog actually is, leading to some confusion about what syntax constructs are supported in structural Verilog when it comes to features such as attributes or multi-bit signals.

Expressions in Verilog

In all situations where Verilog accepts a constant value or signal name, expressions using arithmetic operations such as +, - and *, boolean operations such as & (AND), | (OR) and ^ (XOR) and many others (comparison operations, unary operators, etc.) can also be used.

During synthesis these operators are replaced by cells that implement the respective function.

Many FOSS tools that claim to be able to process Verilog in fact only support basic structural Verilog and simple expressions. Yosys can be used to convert full featured synthesizable Verilog to this simpler subset, thus enabling such applications to be used with a richer set of Verilog features.

Behavioural Modelling

Code that utilizes the Verilog always statement is using behavioural modelling. In behavioural modelling, a circuit is described by means of imperative program code that is executed on certain events, namely any change, a rising edge, or a falling edge of a signal. This is a very flexible construct during simulation but is only synthesizable when one of the following is modelled:

  • Asynchronous or latched logic

    In this case the sensitivity list must contain all expressions that are used within the always block. The syntax @* can be used for these cases. Examples of this kind include:

    // asynchronous
    always @* begin
        if (add_mode)
            y <= a + b;
        else
            y <= a - b;
    end

    // latched
    always @* begin
        if (!hold)
            y <= a + b;
    end

    Note that latched logic is often considered bad style and in many cases is just the result of sloppy HDL design. Therefore many synthesis tools generate warnings whenever latched logic is generated.

  • Synchronous logic (with optional synchronous reset)

    This is logic with d-type flip-flops on the output. In this case the sensitivity list must only contain the respective clock edge. Example:

    // counter with synchronous reset
    always @(posedge clk) begin
        if (reset)
            y <= 0;
        else
            y <= y + 1;
    end

  • Synchronous logic with asynchronous reset

    This is logic with d-type flip-flops with asynchronous resets on the output. In this case the sensitivity list must only contain the respective clock and reset edges. The values assigned in the reset branch must be constant. Example:

    // counter with asynchronous reset
    always @(posedge clk, posedge reset) begin
        if (reset)
            y <= 0;
        else
            y <= y + 1;
    end

Many synthesis tools (including Yosys) support a wider subset of flip-flops that can be modelled using always-statements. But only the ones listed above are covered by the Verilog synthesis standard, and when writing new designs one should limit oneself to these cases.

In behavioural modelling, blocking assignments (=) and non-blocking assignments (<=) can be used. The concept of blocking vs. non-blocking assignment is one of the most misunderstood constructs in Verilog.

The blocking assignment behaves exactly like an assignment in any imperative programming language, while with the non-blocking assignment the right hand side of the assignment is evaluated immediately but the actual update of the left hand side register is delayed until the end of the time-step. For example the Verilog code a <= b; b <= a; exchanges the values of the two registers. See Sec. [sec:blocking_nonblocking] for a more detailed description of this behaviour.
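The two-phase semantics of non-blocking assignment can be mimicked in a few lines of Python (a conceptual sketch, not how any real simulator is implemented): all right-hand sides are evaluated against the old register values before any register is updated, which is why a <= b; b <= a; swaps the two values.

```python
# Sketch of non-blocking assignment semantics: right-hand sides are
# evaluated first, and all updates are committed at the end of the step.

def clock_step(regs, nonblocking_assigns):
    """Apply a list of (target, rhs_function) assignments non-blockingly."""
    # Phase 1: evaluate every right-hand side against the old values.
    updates = {t: rhs(regs) for t, rhs in nonblocking_assigns}
    # Phase 2: commit all updates at once (end of the time-step).
    regs.update(updates)

regs = {"a": 1, "b": 2}
# Verilog: a <= b; b <= a;  -- exchanges the two registers
clock_step(regs, [("a", lambda r: r["b"]), ("b", lambda r: r["a"])])
# regs is now {"a": 2, "b": 1}
```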

Functions and Tasks

Verilog supports functions and tasks to bundle statements that are used in multiple places (similar to procedures in imperative programming). Both constructs can be implemented easily by substituting the function/task-call with the body of the function or task.

Conditionals, Loops and Generate-Statements

Verilog supports if-else-statements and for-loops inside always-statements.

It also supports both features in generate-statements on the module level. This can be used to selectively enable or disable parts of the module based on the module parameters (if-else) or to generate a set of similar subcircuits (for).

While the if-else-statement inside an always-block is part of behavioural modelling, the three other cases are (at least for a synthesis tool) part of a built-in macro processor. Therefore it must be possible for the synthesis tool to completely unroll all loops and evaluate the condition in all if-else-statements in generate-statements using const-folding.

Examples for this can be found in Fig. [fig:StateOfTheArt_for] and Fig. [fig:StateOfTheArt_gen] in App. [chapter:sota].

Arrays and Memories

Verilog supports arrays. This is in general a synthesizable language feature. In most cases arrays can be synthesized by generating addressable memories. However, when complex or asynchronous access patterns are used, it is not possible to model an array as memory. In these cases the array must be modelled using individual signals for each word and all accesses to the array must be implemented using large multiplexers.

In some cases it would be possible to model an array using memories, but it is not desirable. Consider the following delay circuit:

module (clk, in_data, out_data);

parameter BITS = 8;
parameter STAGES = 4;

input clk;
input [BITS-1:0] in_data;
output [BITS-1:0] out_data;

reg [BITS-1:0] ffs [STAGES-1:0];

integer i;
always @(posedge clk) begin
    ffs[0] <= in_data;
    for (i = 1; i < STAGES; i = i+1)
        ffs[i] <= ffs[i-1];
end

assign out_data = ffs[STAGES-1];

endmodule

This could be implemented using an addressable memory with STAGES input and output ports. A better implementation would be to use a simple chain of flip-flops (a so-called shift register). This better implementation can either be obtained by first creating a memory-based implementation and then optimizing it based on the static address signals for all ports, or by directly identifying such situations in the language front end and converting all memory accesses to direct accesses to the correct signals.

Challenges in Digital Circuit Synthesis

This section summarizes the most important challenges in digital circuit synthesis. Tools can be characterized by how well they address these topics.

Standards Compliance

The most important challenge is compliance with the HDL standards in question (in the case of Verilog the IEEE Standards 1364.1-2002 and 1364-2005). This can be broken down into two items:

  • Completeness of implementation of the standard

  • Correctness of implementation of the standard

Completeness is mostly important to guarantee compatibility with existing HDL code. Once a design has been verified and tested, HDL designers are very reluctant regarding changes to the design, even if it is only about a few minor changes to work around a missing feature in a new synthesis tool.

Correctness is crucial. In some areas this is obvious (such as correct synthesis of basic behavioural models). But it is also crucial for the areas that concern minor details of the standard, such as the exact rules for handling signed expressions, even when the HDL code does not target different synthesis tools. This is because (unlike software source code that is only processed by compilers), in most design flows HDL code is not only processed by the synthesis tool but also by one or more simulators and sometimes even a formal verification tool. It is key for this verification process that all these tools use the same interpretation for the HDL code.

Optimizations

Generally it is hard to give a one-dimensional description of how well a synthesis tool optimizes the design. First of all because not all optimizations are applicable to all designs and all synthesis tasks. Some optimizations work (best) on a coarse-grained level (with complex cells such as adders or multipliers) and others work (best) on a fine-grained level (single bit gates). Some optimizations target area and others target speed. Some work well on large designs while others don’t scale well and can only be applied to small designs.

A good tool is capable of applying a wide range of optimizations at different levels of abstraction and gives the designer control over which optimizations are performed (or skipped) and what the optimization goals are.

Technology Mapping

Technology mapping is the process of converting the design into a netlist of cells that are available in the target architecture. In an ASIC flow this might be the process-specific cell library provided by the fab. In an FPGA flow this might be LUT cells as well as special function units such as dedicated multipliers. In a coarse-grain flow this might even be more complex special function units.

An open and vendor independent tool is especially of interest if it supports a wide range of different types of target architectures.

Script-Based Synthesis Flows

A digital design is usually started by implementing a high-level or system-level simulation of the desired function. This description is then manually transformed (or re-implemented) into a synthesizable lower-level description (usually at the behavioural level) and the equivalence of the two representations is verified by simulating both and comparing the simulation results.

Then the synthesizable description is transformed to lower-level representations using a series of tools and the results are again verified using simulation. This process is illustrated in Fig. [fig:Basics_flow].

In this example the System Level Model and the Behavioural Model are both manually written design files. After the equivalence of system level model and behavioural model has been verified, the lower level representations of the design can be generated using synthesis tools. Finally the RTL Model and the Gate-Level Model are verified and the design process is finished.

However, in any real-world design effort there will be multiple iterations for this design process. The reason for this can be the late change of a design requirement or the fact that the analysis of a low-abstraction model (e.g. gate-level timing analysis) revealed that a design change is required in order to meet the design requirements (e.g. maximum possible clock speed).

Whenever the behavioural model or the system level model is changed their equivalence must be re-verified by re-running the simulations and comparing the results. Whenever the behavioural model is changed the synthesis must be re-run and the synthesis results must be re-verified.

In order to guarantee reproducibility it is important to be able to re-run all automatic steps in a design project with a fixed set of settings easily. Because of this, usually all programs used in a synthesis flow can be controlled using scripts. This means that all functions are available via text commands. When such a tool provides a GUI, this is complementary to, and not instead of, a command line interface.

Usually a synthesis flow in a UNIX/Linux environment would be controlled by a shell script that calls all required tools (synthesis and simulation/verification in this example) in the correct order. Each of these tools would be called with a script file containing commands for the respective tool. All settings required for the tool would be provided by these script files so that no manual interaction would be necessary. These script files are considered design sources and should be kept under version control just like the source code of the system level and the behavioural model.

Methods from Compiler Design

Some parts of synthesis tools involve problem domains that are traditionally known from compiler design. This section addresses some of these domains.

Lexing and Parsing

The best known concepts from compiler design are probably lexing and parsing. These are two methods that together can be used to process complex computer languages easily.

A lexer consumes single characters from the input and generates a stream of lexical tokens that consist of a type and a value. For example the Verilog input “assign foo = bar + 42;” might be translated by the lexer to the list of lexical tokens given in Tab. 1.1.

Tab. 1.1: Example list of lexical tokens for the Verilog input “assign foo = bar + 42;”.

Token-Type       Token-Value
TOK_ASSIGN
TOK_IDENTIFIER   foo
TOK_EQ
TOK_IDENTIFIER   bar
TOK_PLUS
TOK_NUMBER       42
TOK_SEMICOLON

The lexer is usually generated by a lexer generator (e.g. flex) from a description file that uses regular expressions to specify the text patterns that should match the individual tokens.

The lexer is also responsible for skipping ignored characters (such as whitespace outside string constants and comments in the case of Verilog) and converting the original text snippet to a token value.

Note that individual keywords use different token types (instead of a keyword type with different token values). This is because the parser usually can only use the Token-Type to make a decision on the grammatical role of a token.
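To make the example concrete, here is a hand-written lexer sketch that produces the token list of Tab. 1.1 from the input string. It uses Python’s re module as a stand-in for a generated scanner like flex; the token names follow the table above, and placing TOK_ASSIGN before TOK_IDENTIFIER in the pattern list is what makes the keyword win over a plain identifier.

```python
# Sketch: a regex-based lexer for the example input "assign foo = bar + 42;".
import re

TOKEN_SPEC = [
    ("TOK_ASSIGN",     r"assign\b"),                 # keyword before identifier
    ("TOK_IDENTIFIER", r"[A-Za-z_][A-Za-z0-9_]*"),
    ("TOK_NUMBER",     r"[0-9]+"),
    ("TOK_EQ",         r"="),
    ("TOK_PLUS",       r"\+"),
    ("TOK_SEMICOLON",  r";"),
    ("SKIP",           r"\s+"),                      # whitespace is ignored
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":                    # lexer skips whitespace
            tokens.append((m.lastgroup, m.group()))
    return tokens

tokens = tokenize("assign foo = bar + 42;")
# [('TOK_ASSIGN', 'assign'), ('TOK_IDENTIFIER', 'foo'), ('TOK_EQ', '='), ...]
```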

The parser then transforms the list of tokens into a parse tree that closely resembles the productions from the computer language’s grammar. Like the lexer, the parser is also typically generated by a code generator (e.g. bison) from a grammar description in Backus-Naur Form (BNF).

Let’s consider the following BNF (in Bison syntax):

assign_stmt: TOK_ASSIGN TOK_IDENTIFIER TOK_EQ expr TOK_SEMICOLON;
expr: TOK_IDENTIFIER | TOK_NUMBER | expr TOK_PLUS expr;

The parser converts the token list to the parse tree in Fig. [fig:Basics_parsetree]. Note that the parse tree never actually exists as a whole as a data structure in memory. Instead the parser calls user-specified code snippets (so-called reduce-functions) for all inner nodes of the parse tree in depth-first order.

In some very simple applications (e.g. code generation for stack machines) it is possible to perform the task at hand directly in the reduce functions. But usually the reduce functions are only used to build an in-memory data structure with the relevant information from the parse tree. This data structure is called an abstract syntax tree (AST).

The exact format for the abstract syntax tree is application specific (while the format of the parse tree and token list are mostly dictated by the grammar of the language at hand). Figure [fig:Basics_ast] illustrates what an AST for the parse tree in Fig. [fig:Basics_parsetree] could look like.
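For illustration, the grammar above can also be handled by a small hand-written recursive-descent parser that directly builds an AST of nested tuples, with the left-recursive expr rule replaced by an iterative, left-associative loop. This is a sketch of the concept only, not how a bison-generated parser works internally, and the tuple AST format is an invented example.

```python
# Sketch: recursive-descent parsing of
#   assign_stmt: TOK_ASSIGN TOK_IDENTIFIER TOK_EQ expr TOK_SEMICOLON;
#   expr: TOK_IDENTIFIER | TOK_NUMBER | expr TOK_PLUS expr;

def parse_assign(tokens):
    pos = 0
    def expect(ttype):
        nonlocal pos
        ttok, tval = tokens[pos]
        assert ttok == ttype, f"expected {ttype}, got {ttok}"
        pos += 1
        return tval
    def operand():                       # TOK_IDENTIFIER | TOK_NUMBER
        nonlocal pos
        ttok, tval = tokens[pos]
        assert ttok in ("TOK_IDENTIFIER", "TOK_NUMBER")
        pos += 1
        return (ttok, tval)
    expect("TOK_ASSIGN")
    lhs = expect("TOK_IDENTIFIER")
    expect("TOK_EQ")
    rhs = operand()
    while pos < len(tokens) and tokens[pos][0] == "TOK_PLUS":
        pos += 1                         # left-associative chain of +
        rhs = ("PLUS", rhs, operand())
    expect("TOK_SEMICOLON")
    return ("ASSIGN", lhs, rhs)

tokens = [("TOK_ASSIGN", "assign"), ("TOK_IDENTIFIER", "foo"),
          ("TOK_EQ", "="), ("TOK_IDENTIFIER", "bar"),
          ("TOK_PLUS", "+"), ("TOK_NUMBER", "42"), ("TOK_SEMICOLON", ";")]
ast = parse_assign(tokens)
# ('ASSIGN', 'foo', ('PLUS', ('TOK_IDENTIFIER', 'bar'), ('TOK_NUMBER', '42')))
```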

Usually the AST is then converted into yet another representation that is more suitable for further processing. In compilers this is often an assembler-like three-address-code intermediate representation.

Multi-Pass Compilation

Complex problems are often best solved when split up into smaller problems. This is certainly true for compilers as well as for synthesis tools. The components responsible for solving the smaller problems can be connected in two different ways: through single-pass pipelining and by using multiple passes.

Traditionally a parser and lexer are connected using the pipelined approach: The lexer provides a function that is called by the parser. This function reads data from the input until a complete lexical token has been read. Then this token is returned to the parser. So the lexer does not first generate a complete list of lexical tokens and then pass it to the parser. Instead they run concurrently and the parser can consume tokens as the lexer produces them.

The single-pass pipelining approach has the advantage of a lower memory footprint (at no time must the complete design be kept in memory) but has the disadvantage of tighter coupling between the interacting components.

Therefore single-pass pipelining should only be used when the lower memory footprint is required or the components are also conceptually tightly coupled. The latter certainly is the case for a parser and its lexer. But when data is passed between two conceptually loosely coupled components it is often beneficial to use a multi-pass approach.

In the multi-pass approach the first component processes all the data and the result is stored in an in-memory data structure. Then the second component is called with this data. This reduces complexity, as only one component is running at a time. It also improves flexibility as components can be exchanged more easily.
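The difference between the two coupling styles can be sketched with Python iterators: a generator hands over tokens one at a time as the consumer asks for them (single-pass pipelining), while materializing the complete token list first and then handing it over corresponds to the multi-pass approach. The lexing itself is a trivial invented stand-in, not a real Verilog lexer.

```python
# Sketch: pipelined (generator) vs. multi-pass (materialized list) coupling.

def lex(text):
    # Stand-in lexer: one token per whitespace-separated word.
    for word in text.split():
        yield ("TOK_WORD", word)

text = "assign foo = bar"

# Pipelined: the consumer pulls tokens as the lexer produces them;
# the complete token list never needs to exist before consumption starts.
pipelined = [tok for tok in lex(text)]

# Multi-pass: run the lexer to completion, store the result in memory,
# then hand the complete list to the next component.
token_list = list(lex(text))
multipass = [tok for tok in token_list]

# Both styles produce the same result; they differ in memory footprint
# and in how tightly producer and consumer are coupled.
```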

Most modern compilers are multi-pass compilers.

Static Single Assignment Form

In imperative programming (and behavioural HDL design) it is possible to assign the same variable multiple times. This can either mean that the variable is independently used in two different contexts or that the final value of the variable depends on a condition.

The following examples show C code in which one variable is used independently in two different contexts:

void demo1()
{
    int a = 1;
    printf("%d\n", a);
    a = 2;
    printf("%d\n", a);
}

void demo1()
{
    int a = 1;
    printf("%d\n", a);
    int b = 2;
    printf("%d\n", b);
}

void demo2(bool foo)
{
    int a;
    if (foo) {
        a = 23;
        printf("%d\n", a);
    } else {
        a = 42;
        printf("%d\n", a);
    }
}

void demo2(bool foo)
{
    int a, b;
    if (foo) {
        a = 23;
        printf("%d\n", a);
    } else {
        b = 42;
        printf("%d\n", b);
    }
}

In both examples the first version (only variable a) and the second version (variables a and b) are equivalent. Therefore it is desired for further processing to bring the code into an equivalent form for both cases.

In the following example the variable is assigned twice but it cannot easily be replaced by two variables:

void demo3(bool foo)
{
    int a = 23;
    if (foo)
        a = 42;
    printf("%d\n", a);
}

Static single assignment (SSA) form is a representation of imperative code that uses identical representations for the first and second versions of demos 1 and 2, but can still represent demo 3. In SSA form each assignment assigns a new variable (usually written with an index). But it also introduces a special Φ-function to merge the different instances of a variable when needed. In C-pseudo-code demo 3 would be written as follows using SSA form:

void demo3(bool foo)
{
    int a_1, a_2, a_3;
    a_1 = 23;
    if (foo)
        a_2 = 42;
    a_3 = phi(a_1, a_2);
    printf("%d\n", a_3);
}

The Φ-function is usually interpreted as “these variables must be stored in the same memory location” during code generation. Most modern compilers for imperative languages such as C/C++ use SSA form for at least some of their passes as it is very easy to manipulate and analyse.
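A minimal sketch of the renaming step for the diamond-shaped control flow of demo 3: every assignment receives a fresh indexed name, and a phi is inserted at the join point to merge the two definitions that can reach it. The function and its block representation are invented for illustration; real SSA construction handles arbitrary control-flow graphs and dominance frontiers.

```python
# Sketch: SSA renaming for "a = 23; if (foo) a = 42; use(a)".
from collections import defaultdict

def ssa_rename(entry, then_block, join_var):
    """Rename assignments in an entry block and one conditional block,
    then insert a phi for join_var at the join point."""
    counter = defaultdict(int)
    def fresh(v):
        counter[v] += 1
        return f"{v}_{counter[v]}"         # a -> a_1, a_2, ...
    out = []
    reaching = {}                          # variable -> current SSA name
    for var, expr in entry:                # statements before the branch
        reaching[var] = fresh(var)
        out.append((reaching[var], expr))
    fallthrough = reaching[join_var]       # definition reaching the join
                                           # when the branch is not taken
    for var, expr in then_block:           # statements in the taken branch
        reaching[var] = fresh(var)
        out.append((reaching[var], expr))
    merged = fresh(join_var)               # phi merges both definitions
    out.append((merged, f"phi({fallthrough}, {reaching[join_var]})"))
    return out, merged

stmts, result = ssa_rename([("a", "23")], [("a", "42")], "a")
# stmts: [('a_1', '23'), ('a_2', '42'), ('a_3', 'phi(a_1, a_2)')]
```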

[1] In recent years formal equivalence checking has also become an important verification method for validating RTL and lower abstraction representations of the design.
