
Why Codasip Cares About Processor Verification – and Why You Should Too


February 28, 2022

Finding a hardware bug in silicon has consequences. The severity of these consequences for the end user depends on the use case. For the product manufacturer, fixing a bug once a design is in mass production can incur a significant cost. Investing in processor verification is therefore fundamental to ensuring quality. This is something we care passionately about at Codasip, and here is why you should too.

Luckily for the semiconductor industry, there are statistically more bugs in software than in hardware, and in processors in particular. However, software can easily be upgraded over the air, directly in the end products used by consumers. With hardware, on the other hand, this is not as straightforward, and a hardware issue can have severe consequences. The quality of our deliverables, which will end up in real silicon, seriously matters.

Processors all have different quality requirements

Processors are ubiquitous. They control the flash memory in your laptop, the braking system of your car, or the chip on your credit card. These CPUs have different performance requirements, but also different security and safety requirements. In other words, different quality requirements.

Is it a major issue if the Wi-Fi chip in your laptop is missing a few frames? The Wi-Fi protocol retransmits the packet and it goes largely unnoticed. If your laptop’s SSD controller drops a few packets and corrupts the document you have been working on all day, it will be a serious disruption to your work. There may be some shouting, but you will recover. It’s a bug that you might be able to accept.

Other hardware failures have much more severe consequences. What if your car’s braking system fails because of a hardware issue? Or the fly-by-wire communication in a plane fails? Or what if a satellite falls to Earth because its orbit control fails? Some bugs and hardware failures are simply not acceptable.

Processor quality, and therefore reliability, is the main concern of processor verification teams. And processor verification is a subtle art.

The subtle art of processor verification

Processor verification requires strategy, diligence and completeness.

Verifying a processor means taking uncertainty into account. What software will run on the end product? What will the use cases be? What asynchronous events could occur? These unknowns significantly widen the verification scope. However, it is impossible to cover the entire processor state space, and it is not something to aim for.

Processor quality must be ensured while making the best use of time and resources. At the end of the day, the ROI must be positive. Nobody wants to find costly bugs after the product release, and nobody wants to delay a project because of an inefficient verification strategy. Doing smart processor verification means finding relevant bugs efficiently and as early as possible in the product development.

In other words, processor verification must:

  • Be done during processor development, in collaboration with design teams. Verifying a processor design once it is finalized is not enough. Verification must drive the design, to some extent.
  • Be a combination of all industry-standard techniques. There are different types of bugs at different levels of complexity that you might find using random testing, code coverage, formal proofs, power-aware simulation, etc. Using multiple techniques allows you to maximize the potential of each of them (what we could also call the Swiss cheese model) and efficiently cover as many corner cases as possible (a toy sketch of random testing follows this list).
  • Be treated as an ongoing science. What works is a combination of techniques that evolves as processors become more complex. We develop new approaches as we learn from previous bugs and designs, refining our verification methodology to offer best-in-class quality IP.
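To make the random-testing idea concrete, here is a minimal toy sketch in C, not Codasip’s actual methodology: generate constrained-random stimulus, run it through a golden reference model and a device under test (DUT), and flag any mismatch. In real processor verification the reference would be an instruction set simulator and the DUT an RTL simulation; the two tiny ALU models and the planted bug below are invented for illustration.

```c
/* Toy sketch of constrained-random testing: run random stimulus
 * through a golden reference model and a device under test (DUT)
 * and flag any mismatch. In real processor verification the
 * reference would be an ISS and the DUT an RTL simulation; the
 * tiny ALU models and the planted bug here are for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

typedef enum { OP_ADD, OP_SUB, OP_AND, OP_COUNT } op_t;

/* Golden reference model. */
static uint32_t ref_alu(op_t op, uint32_t a, uint32_t b) {
    switch (op) {
    case OP_ADD: return a + b;
    case OP_SUB: return a - b;
    default:     return a & b;
    }
}

/* DUT model with a planted corner-case bug: ADD misbehaves on carry-out. */
static uint32_t dut_alu(op_t op, uint32_t a, uint32_t b) {
    if (op == OP_ADD && (uint64_t)a + b > UINT32_MAX)
        return (a + b) ^ 1u;  /* the planted bug */
    return ref_alu(op, a, b);
}

int main(void) {
    srand(42);
    for (long i = 0; i < 1000000; i++) {
        op_t op = (op_t)(rand() % OP_COUNT);
        /* Constrain stimulus: bias operands toward corner values so
         * rare paths such as carry-out are actually exercised. */
        uint32_t a = (rand() % 4 == 0) ? 0xFFFFFFFFu : (uint32_t)rand();
        uint32_t b = (rand() % 4 == 0) ? 0xFFFFFFFFu : (uint32_t)rand();
        if (ref_alu(op, a, b) != dut_alu(op, a, b)) {
            printf("Mismatch at test %ld: op=%d a=%08x b=%08x\n",
                   i, (int)op, (unsigned)a, (unsigned)b);
            return 1;
        }
    }
    puts("1,000,000 random tests passed");
    return 0;
}
```

Without the corner-value bias, purely uniform operands would hit the carry-out path far less often; constraining stimulus toward interesting values is what makes random testing efficient.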

Processor quality is fundamental. The art of verifying a processor is a subtle one that evolves as the industry changes and new requirements arise. At Codasip, we have put in place verification methodologies that allow us to deliver high-quality customizable RISC-V processors. With Codasip Studio and its associated tools, we provide our customers with the best technology to verify their specific processor customizations.

Philippe Luc



Why and How to Customize a Processor


October 1, 2021

Processor customization is one approach to optimizing a processor IP core to handle a specific workload. The idea is to take an existing core that could partially meet your requirements and use it as a starting point for your optimized processor. Now, why and how to customize a processor?

Why should you consider creating a custom core?

Before we start, let’s make sure we are all on the same page. Processor configuration and processor customization are two different things. Configuring a processor means setting the options made available by your IP vendor (cache size, MMU support, etc.). Customizing a processor means adding or changing something that requires more invasive changes, such as modifying the ISA or writing new instructions. In this blog post we focus on processor customization.

Customizing an existing processor is particularly relevant when you create a product that must be performant, area-efficient, and energy-efficient at the same time. Whether you are designing a processor for an autonomous vehicle that requires both vector instructions and low-power features, or a processor for computational storage with real-time requirements and power and area constraints, you need an optimized, specialized core.

Processor customization allows you to bring into a single processor IP all the architecture extensions you need, standard or custom, that would otherwise have been spread across multiple IPs on the SoC or bundled into one big, energy-intensive IP. Optimizing an existing processor for your unique needs has significant advantages:

  • It allows you to save area and optimize power and performance as it targets exactly what you need.
  • It is ready to use. The processor design has already been verified – you only have to verify your custom extensions.
  • You design for differentiation. Because you own the changes, you create a unique product.

Now, one may think that this is not so easy. How reliable is the verification of a custom processor? Differentiation is becoming more difficult, time-consuming, and sometimes more expensive. The success of processor customization relies on two things:

  • An open-source ISA such as RISC-V.
  • Design and verification automation.

Custom processor, ASIP, domain-specific accelerator, hardware accelerator, application-specific processor… these are all terms related to processor customization.

Customizing a RISC-V processor

Remember: the RISC-V Instruction Set Architecture (ISA) was created with customization in mind. If you want to create a custom processor, starting from an existing RISC-V processor is ideal.

You can add optional standard extensions and non-standard custom extensions on top of the base instruction set to tailor your processor for a given application.

RISC-V Modular Instruction Set. Source: Codasip.
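As a low-level illustration of what a non-standard extension looks like from the software side: the RISC-V ISA reserves the custom-0 through custom-3 major opcodes for vendor extensions, and the GNU assembler’s .insn directive can emit such an encoding even before the toolchain knows the instruction by name. The funct3/funct7 values and the instruction’s semantics below are invented for illustration, and the snippet builds only with a RISC-V GCC/Clang toolchain.

```c
/* Illustration only: hand-emit a hypothetical R-type instruction in
 * the RISC-V custom-0 opcode space (major opcode 0x0b) using the GNU
 * assembler's .insn directive, before the toolchain knows it by name.
 * funct3 = 0 and funct7 = 0 are invented, as is the instruction's
 * behavior; this builds only with a RISC-V GCC/Clang toolchain. */
#include <stdint.h>

static inline uint32_t my_custom_op(uint32_t a, uint32_t b) {
    uint32_t result;
    __asm__ volatile (".insn r 0x0b, 0, 0, %0, %1, %2"
                      : "=r"(result)
                      : "r"(a), "r"(b));
    return result;
}
```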

For a robust customization process that ensures quality in the design and confidence in the verification, automation is key.

With Codasip you can license RISC-V processors:

  • In the usual way (RTL, testbench, SDK).
  • In the CodAL source code.

CodAL is the language used to design Codasip RISC-V processors and to generate the SDK and HDK. You can then edit the CodAL source code to create your own custom extensions and modify other architectural features as needed.

Microsemi opted for this approach when they wanted to replace a proprietary embedded core with a RISC-V one. Check out this great processor customization use case with Codasip IP and technology!

Processor design automation with Codasip Studio

The legacy approach to adding new instructions to a core is based on manual editing. Custom instructions must be reflected in the following areas:

  • Software toolchain.
  • Instruction set simulator.
  • Verification environment.
  • RTL.

Design Automation without Codasip Studio. Source: Codasip.

With the software toolchain, intrinsics can be created so that the compiler uses the new instructions, but this also means that the application code needs updating. Modifying the existing ISS and RTL by hand are both potential sources of errors. Lastly, if the verification environment needs changing, this is a further area for problems. Verifying these manual changes is a big challenge and adds risk to the design project.
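As a sketch of what the application-side change can look like, assume a hypothetical custom multiply-accumulate instruction: the new instruction is wrapped in an intrinsic-style helper with a plain C fallback, so the same source builds both on the customized core and on a stock one. The __builtin_custom_mac name and the HAVE_CUSTOM_MAC macro are invented; a real customized toolchain would supply its own intrinsic.

```c
/* Hypothetical intrinsic wrapper for a custom multiply-accumulate
 * instruction. Both __builtin_custom_mac and HAVE_CUSTOM_MAC are
 * invented names; a real customized toolchain would supply its own
 * intrinsic. The plain C fallback keeps the same source building
 * (and behaving identically) on a stock core. */
#include <stdint.h>

static inline uint32_t mac32(uint32_t acc, uint32_t a, uint32_t b) {
#ifdef HAVE_CUSTOM_MAC
    return __builtin_custom_mac(acc, a, b);   /* one custom instruction */
#else
    return acc + a * b;                       /* portable fallback */
#endif
}
```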

Some vendors offer partially automated solutions, but by not covering all aspects of processor customization they still leave room for error due to the manual changes.

Design Automation with Codasip Studio. Source: Codasip.

In contrast, with Codasip the changes are made only to the CodAL source code. The LLVM toolchain is automatically generated with support for the new instructions. Similarly, the ISS and RTL are generated to include the custom instructions and can be checked using the updated UVM environment. This approach not only saves time but also results in a more robust customization process.

Create an application-specific processor with RISC-V and Codasip

As differentiation becomes more difficult, time-consuming, and sometimes more expensive with traditional processor design, customizing a processor so that it meets your unique requirements is key. Creating an application-specific processor efficiently, without compromising PPA, requires an open-source architecture and tools that automate the design and verification process. Find out more in our white paper on “Creating Domain-Specific Processors using custom RISC-V ISA instructions”.

Roddy Urquhart



Understanding the Performance of Processor IP Cores


August 20, 2020

Looking at any processor IP, you will find that its vendor emphasizes PPA (performance, power, and area) numbers. In theory, these should provide a level playing field for comparing different processor IP cores, but in reality the situation is more complex. Let us consider processor performance.

What does processor performance mean?

The first thing to think about is which aspect of performance you care about. Do you care more about absolute throughput (performance per second) or about performance per MHz? In an application such as machine vision, which runs continuously and uses complex algorithms, you will likely care about absolute throughput. However, if you have a wireless sensor node with a low duty cycle, you will want the node to be active for as few clock cycles as possible when it wakes up. This means you will care about how much computation you achieve per MHz.

About 40 years ago, computers were compared on the basis of MIPS (millions of instructions per second). The problem is: what is an instruction? Instructions vary considerably in complexity from one architecture to another, so an operation will generally require fewer cycles on a CISC processor than on a RISC one. MIPS figures were only helpful when comparing products with similar architectures and were dubbed “meaningless indices of performance” by some!

Another thing to think about is the type of computation you expect to care most about. Is it integer operations – and if so, which ones – or, say, floating-point computations? In the past, MFLOPS (millions of floating-point operations per second) was a popular measure. But again, what is an ‘operation’?

Popular synthetic benchmarks

Today, synthetic benchmarks are universally used with processor IP cores. They have the following characteristics:

  • They are relatively small and portable.
  • They are representative of commonly used relevant applications.
  • They are reproducible and transparent.
  • They can be applied to a range of processors fairly.
  • They express the benchmark result as a single number.

Dhrystone

A benchmark that has remained popular for the last 36 years is Dhrystone. Its name is a play on words comparing it with the once-popular Whetstone benchmark. While Whetstone focused on floating-point operations, Dhrystone focuses on integer and string operations. Dhrystone results are generally quoted as DMIPS: the Dhrystone score divided by that of a nominally 1 MIPS machine. The benchmark has been criticized because modern compilers can optimize away parts of the work, meaning that it partly tests compiler rather than processor performance.
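The reference machine behind DMIPS is the VAX 11/780, which achieved roughly 1757 Dhrystones per second and was regarded as a 1 MIPS machine. A minimal conversion from a raw measurement, with placeholder numbers, might look like this:

```c
/* Convert a raw Dhrystone measurement into DMIPS and DMIPS/MHz.
 * 1757 Dhrystones/second is the VAX 11/780 reference score, the
 * machine nominally rated at 1 MIPS. Input numbers are placeholders. */
#include <stdio.h>

int main(void) {
    double iterations = 2300000.0;  /* Dhrystone loop iterations (placeholder) */
    double seconds    = 10.0;       /* measured run time (placeholder) */
    double freq_mhz   = 100.0;      /* core clock in MHz (placeholder) */

    double dhrystones_per_sec = iterations / seconds;
    double dmips = dhrystones_per_sec / 1757.0;
    printf("DMIPS     : %.1f\n", dmips);            /* absolute performance */
    printf("DMIPS/MHz : %.2f\n", dmips / freq_mhz); /* per-MHz efficiency */
    return 0;
}
```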

For floating point, Whetstone is rarely used nowadays; it is more likely that LINPACK would be used. LINPACK performs an LU decomposition of a matrix using floating-point numbers. The result is expressed in MFLOPS.
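The MFLOPS figure follows from the standard LINPACK operation count: factoring and solving an n-by-n system costs roughly 2n³/3 + 2n² floating-point operations. A minimal conversion, again with placeholder numbers:

```c
/* LINPACK-style MFLOPS from the standard operation count: factoring
 * and solving an n-by-n system costs about 2n^3/3 + 2n^2 floating-
 * point operations. The timing value is a placeholder. */
#include <stdio.h>

int main(void) {
    double n = 100.0;        /* classic LINPACK-100 problem size */
    double seconds = 0.05;   /* measured solve time (placeholder) */
    double flops = (2.0 * n * n * n) / 3.0 + 2.0 * n * n;
    printf("MFLOPS: %.2f\n", flops / seconds / 1e6);
    return 0;
}
```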

CoreMark

Another popular synthetic benchmark for embedded applications is EEMBC’s CoreMark®, which aims to perform operations representative of embedded integer processing needs. These include list processing, matrix operations, finite state machines, and CRC.

Find more details and some tips on measuring processor performance according to your needs in this video!

Assessing performance when choosing a processor

There are various benchmark systems out there, each suited to measuring a slightly different type of performance. So how do you assess performance when choosing processor IP for your project?

If your embedded software performs operations similar to those of a synthetic benchmark, that benchmark may give you useful initial guidance quickly and simply. However, such benchmarks are normally quoted per MHz, for example CoreMark/MHz. The per-MHz figure is normally a good indication for a low-power application where you are looking for good results per cycle. However, if you are looking for high absolute performance, it may be misleading. Instead, you should consider, say, the CoreMark score achievable at your target clock frequency.
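To see why, take some made-up numbers: a core scoring 3.0 CoreMark/MHz but limited to 300 MHz delivers an absolute score of 900, while a core scoring only 2.0 CoreMark/MHz that closes timing at 1 GHz delivers 2,000. The weaker per-MHz core wins comfortably on absolute throughput.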

If your main concern is floating-point performance, bear in mind that DMIPS and CoreMark are integer benchmarks. You would do better to compare cores on the basis of a floating-point benchmark such as LINPACK.

Ultimately, it always makes sense to invest the time in running realistic software on a processor core to assess whether it gives you the performance you need. If you are looking at RISC-V, profiling your software to understand where the computational bottlenecks are can also help you assess whether adding custom instructions would improve performance.

It is not just about processor performance and scores

In this article we have looked at processor performance, but that is only one aspect of PPA and only one factor to consider when choosing a processor. PPA numbers are always about balance, and all of them matter when choosing an IP for a project, among other key considerations: the ISA, processor complexity, processor memory, and even the licensing model will all influence your choice. Find out more in our white paper “What you should consider when choosing a processor IP core”.

Roddy Urquhart

