
Creating specialized architectures with design automation


August 29, 2022

With semiconductor scaling slowing down, if not failing, SoC designers are challenged to find ways of meeting the demand for greater computational performance. In their 2018 Turing Lecture, Hennessy and Patterson pointed out that new methods are needed to work around failing scaling and predicted ‘a new golden age for computer architecture’. A key approach in addressing this challenge is to innovate architecturally and to create more specialized processing units – domain-specific processors and accelerators.

If you haven’t already, I recommend reading our white paper on semiconductor scaling and what is next for processors.

AUTOMATION FOR CREATING APPLICATION-SPECIFIC PROCESSORS

Specialized processor cores are, by definition, going to vary a lot depending on their workload. Some may be readily developed by customizing existing RISC-V processor cores. However, that approach will not work in every instance, and sometimes developing a novel architecture may be necessary. In such cases, the instruction set architecture (ISA) and microarchitecture need to be explored to find a good design solution.

Traditionally, custom cores were developed by manually creating an instruction set simulator (ISS), software toolchain, and RTL. This process can be time-consuming and error-prone. The alternative is to describe the processor core in a high-level language and to use processor design automation to generate the ISS, software toolchain, RTL, and verification environment.

This is exactly what Codasip offers. Codasip Studio is our unique processor design automation toolset. It has been applied to RISC, DSP, and VLIW designs, and we use it for developing our own RISC-V cores. Processors are described using the CodAL architectural language, which covers both instruction-accurate (IA) and cycle-accurate (CA) descriptions.

EXPLORING THE INSTRUCTION SET WITH CODASIP STUDIO

The first stage in developing a core optimized for a particular application is to define the instruction set. If a RISC-V standard wordlength such as 32 or 64 bits is acceptable, then the corresponding base integer instruction set can be a good starting point to save time. However, if a different wordlength such as 8 or 16 bits is chosen, then RISC-V cannot be used directly. Instead, instructions based on the smaller wordlength should be developed, as Google did when developing their 8-bit tensor processing unit (TPU).

Codasip CodAL

Once an initial instruction set is defined, an iterative exploration phase with the SDK in the loop can be undertaken. Rather than using an open-source toolchain or instruction set simulator such as GCC, LLVM, or Spike, we recommend describing the instruction set, resources, and semantics using our CodAL processor description language. Once the instruction set is modelled, the software toolchain (or SDK) can be automatically generated using the Codasip Studio toolset. The SDK can be used to profile real application software, and any hotspots in the software can be identified.

SDK in the loop

Once hotspots are known, the instruction set can be adjusted to better meet the needs of the computational workload. The updated description in CodAL can be used to generate an SDK for further profiling. This can be repeated through further iterations until the ISA is finalized.
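
To make this loop concrete, here is a minimal sketch in C, assuming a hypothetical multiply-accumulate custom instruction and an equally hypothetical intrinsic name (neither is an actual Codasip API). Profiling might flag the first function as a hotspot; the second shows how the application could exercise the new instruction once it has been added to the CodAL model and the SDK regenerated.

    #include <stdint.h>

    /* A hotspot the profiler might flag: a fixed-point dot product in the
       application's inner loop. */
    int32_t dot_product(const int16_t *a, const int16_t *b, int n)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++) {
            acc += (int32_t)a[i] * b[i];      /* multiply-accumulate per element */
        }
        return acc;
    }

    /* After adding a custom multiply-accumulate instruction to the ISA model,
       the same loop can call it through an intrinsic and be re-profiled with
       the regenerated SDK. The intrinsic name below is purely illustrative. */
    int32_t dot_product_custom(const int16_t *a, const int16_t *b, int n)
    {
        int32_t acc = 0;
        for (int i = 0; i < n; i++) {
            acc = __builtin_custom_mac(acc, a[i], b[i]);  /* hypothetical intrinsic */
        }
        return acc;
    }

Re-profiling the second version shows whether the hotspot has been removed or whether another ISA iteration is needed.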

DEFINING THE MICROARCHITECTURE WITH CODASIP STUDIO

With a stable instruction set, the next phase is to develop the microarchitecture using CodAL. Decisions need to be taken on forms of parallelism (such as multiple issue), pipeline length, and privilege modes. This leads to a new iterative phase with the HDK in the loop.

HDK in the loop

CodAL is used to define a cycle-accurate description of the core using the existing instruction set and resources. This in turn can be used to generate a CA simulator, RTL, and a testbench. The performance of the design can be assessed using the CA simulator, RTL simulation, or an FPGA prototype. The RTL can be analyzed for silicon area and timing. If the microarchitecture is not meeting its targets, it can be modified and the HDK regenerated.

The final and essential stage is to verify the final RTL against an IA golden reference. Codasip Studio can generate the UVM verification environment for this. Additionally, Codasip Studio has tools to help with functional coverage and to generate random assembler programs. Users should also use their own directed tests and apply multiple verification strategies.

Roddy Urquhart

Embedded World 2022 – the RISC-V genie is out of the bottle


June 28, 2022

The last Embedded World was back in February 2020, but the event was hit hard by Covid-19, with many exhibitors and visitors deciding to pull out at the last minute. No-one knew then that it would take almost two and a half years before the embedded industry would regroup again in Nuremberg. Even now, in June 2022, a lot of people are still hesitant to travel, and the volume of visitors in the halls was about half that of the glory days of 2018 and 2019. However, compared to the RISC-V Summit in California in December 2021, this time there were no empty aisles and there was a steady flow of visitors walking the halls. Embedded World 2022 was an important conference for us and the RISC-V community.

Everyone knows about RISC-V and Codasip is no longer a well-kept secret

You know when you meet children that you have not seen in a few years and cannot get over how much they have grown? Well, probably some people had that same experience with RISC-V this time around at Embedded World 2022. Because it was evident that RISC-V is all grown up now!

Thinking back to my second week at Codasip in 2017, I was at our stand at Embedded World. We had displays that spoke of RISC-V and processor design automation, but our exhibit raised eyebrows. The typical passer-by simply asked, “What is RISC-V?” or “Who are you?”.

In contrast, by this year everyone knows about RISC-V – students and corporations large and small alike. We were also a strong contributor of presentations to the RISC-V Pavilion. And a good thing for us here at Codasip is that most people seem to also know us as a leading RISC-V IP provider.

Our team gave three talks at the RISC-V Pavilion during the conference

Codasip is also no longer a well-kept secret, and we were well sought out by customers, partners and the media. Whether it’s our technology, or because of all the recruitment activities, it’s evident we’ve created a lot of interest. (And yes, we are still hiring, check out our open positions.)

If someone wasn’t sure what Codasip does, they most probably do now. Our customizable, low-power L31 embedded processor was awarded an Embedded World Best in Show by Embedded Computing Design magazine, and the strong team on the Codasip stand was demonstrating our capabilities in delivering low-power embedded AI. In addition, we announced Apple macOS support in Codasip Studio, plus secure boot for Codasip processors in collaboration with Veridify, and our friends at XtremeEDA and Crypto Quantique announced secure deployment using Codasip IP.

Brett Cline receiving the Best-in-Show award on behalf of Codasip for our L31 processor IP

What was the buzz from the show floor?

Well, one company in security software, whose offering is partly based on standards driven by Arm, said they were expanding into RISC-V because their customers wanted easier migration from Arm to RISC-V. In contrast, another security software company said that they were not yet ready to support RISC-V, given the limited number of production RISC-V devices in the field – but, considering the interest in RISC-V, that would change in time. The industry is definitely at a tipping point – the RISC-V genie is out of the bottle!

We also heard an opinion that RISC-V compiler quality is now better than Arm’s, which is interesting indeed. There’s a long way to go, but with a rapidly growing ecosystem delivering a plethora of customizations and optimizations using the open RISC-V ISA, and with better tools on top of that, RISC-V is outstripping expectations. Increasingly, we think RISC-V is starting to keep Arm and other established ISAs awake at night.

The next chance to meet the Codasip team is at DAC, at the Moscone Center in San Francisco on July 10-14 in booth #1451.

Meet us at DAC

Roddy Urquhart

Closing the Gap in SoC Open Standards with RISC-V


March 24, 2022

The semiconductor industry has changed hugely in the last three or four decades. Around 1980, some larger semiconductor companies were strongly vertically integrated, not only designing and manufacturing their products but even making their own processing equipment and in-house EDA tools. Today almost every semiconductor company uses third-party equipment for IC manufacturing and designs using third-party EDA tools and third-party IP. A key reason why this disaggregation of the semiconductor industry has happened is the use of open standards.

There is no universally agreed definition of an open standard, but it is generally agreed that open standards are available on a reasonable and non-discriminatory basis. In many cases, especially in SoC design, such standards are available on a royalty-free basis. Many open standards are owned by independent bodies such as the IEEE, OSI, and the IETF (Internet Engineering Task Force) rather than by companies. In such cases, further development of the standard happens through an open process with broad participation.

Open Standards and SoC Design

It is worth looking at open standards for SoCs from both hardware and software angles. For embedded software, C and C++ have been well established as open standards. Middleware and real-time operating systems (RTOS) have therefore frequently been supplied as source code using one of these languages. Some porting may be necessary where there are processor or peripheral dependencies, but generally design teams can tackle this.
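
As a small illustration of where such porting typically lives, here is a sketch in C of a hardware abstraction function that isolates a processor dependency behind a portable interface. The register address is made up for illustration; only the `__riscv` predefined macro and the RISC-V `cycle` CSR are standard.

    #include <stdint.h>

    /* Portable middleware calls this small abstraction; only this function
       needs attention when porting to a different core or board. */
    uint32_t hal_read_cycles(void)
    {
    #if defined(__riscv)
        uint32_t cycles;
        __asm__ volatile ("csrr %0, cycle" : "=r"(cycles));  /* RISC-V cycle CSR */
        return cycles;
    #else
        /* Fallback: a hypothetical memory-mapped timer peripheral. */
        volatile uint32_t *timer_reg = (volatile uint32_t *)0x40001000u;
        return *timer_reg;
    #endif
    }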

In many current devices, especially in IoT, an SoC has either wired or wireless communications. Such links require communication protocols based on open standards such as Ethernet or Bluetooth LE. Such networked devices are also likely to require some sort of security and again open standards enable secure communications.

In digital hardware design, the microarchitecture is described in a hardware description language. Both Verilog and VHDL are IEEE open standards and the RTL description will be synthesized to the gate level. Processors and peripherals are frequently connected by AMBA buses which are a set of standards owned by Arm but available royalty free.

Verification will frequently be done using UVM (Universal Verification Methodology) which again is an open standard managed by the Accellera industry organisation. Power intent can be expressed in UPF (Unified Power Format) – another Accellera standard.

Finally, at the physical design level layout is required for silicon manufacturing. For decades GDSII, originally developed at Calma, has been used as the main interchange format. More recently, OASIS (Open Artwork System Interchange Standard) has been used as an open standard for layout.

The benefits of open standards

Open standards have provided many benefits to industry. Firstly, they have provided interoperability between chips, between software packages and between design tools. This has enabled disaggregation.

Secondly, if there are open standards there is an opportunity for an ecosystem of products and vendors to develop. For example, with C there are a host of software development tools available as well as middleware and RTOS products for embedded software reuse. At the hardware level there is a wide range of EDA tools that use open standards such as Verilog, UVM and OASIS. This means that development teams have a wide choice of vendors and do not need to depend on a single vendor.

Thirdly, an open standard means that one level of specification is already accomplished allowing product companies to focus on differentiation through their implementation.

However, the ‘elephant in the room’ is that there has been an obvious gap in the open standards. The ISA represents the all-important interface between hardware and software, but this has historically been almost exclusively the preserve of proprietary ISAs.

Closing the gap in open standards with RISC-V

With RISC-V there is, for the first time, a truly open standard for an ISA with real industry support. The ISA combines a very lightweight base integer instruction set with the flexibility of standard and custom extensions. The RISC-V ISA does not specify a microarchitecture, so, for example, Codasip has developed RISC-V processor cores with three-, five-, and seven-stage pipelines, allowing designers to match a core to their needs. IP vendors differentiate with microarchitecture.

An immediate benefit for embedded software suppliers and for SoC developers is that it is attractive to offer middleware as binaries (as well as just source code). This alone should help RISC-V adoption to accelerate by simplifying work for embedded software developers.

Using an open ISA is a catalyst for a rapidly expanding ecosystem embracing processor IP vendors, software development tool providers, software companies, and semiconductor companies. Just as proprietary Token Ring products in networking were squeezed out by the growing Ethernet ecosystem around 1990, we can expect proprietary ISAs to be squeezed out by RISC-V in the coming decade.

Lastly, for companies developing their own processor cores, the base instruction set is available royalty-free. The modularity and extensibility of the RISC-V ISA mean that basic instructions are already defined, and developers can focus on the specific value-add of their core or accelerator.

Adopting RISC-V is now a low-risk choice for embedded SoC developers. The crucial gap in SoC open standards has been closed to the benefit of both hardware and software developers.

Roddy Urquhart

Why and How to Customize a Processor


October 1, 2021

Processor customization is one approach to optimizing a processor IP core to handle a specific workload. The idea is to take an existing core that could partially meet your requirements and use it as a starting point for your optimized processor. Now, why and how to customize a processor?

Why should you consider creating a custom core?

Before we start, let’s make sure we are all on the same page. Processor configuration and processor customization are two different things. Configuring a processor means setting the options made available by your IP vendor (cache size, MMU support, etc.). Customizing a processor means adding or changing something that requires more invasive changes, such as modifying the ISA or writing new instructions. In this blog post we focus on processor customization.

Customizing an existing processor is particularly relevant when you create a product that must be performant, area efficient, and energy efficient at the same time. Whether you are designing a processor for an autonomous vehicle that requires both vector instructions and low-power features, or a processor for computational storage with real-time requirements and power and area constraints, you need an optimized and specialized core.

Processor customization allows you to bring into a single processor IP all the architecture extensions you need, standard or custom, that would otherwise have been spread across multiple IPs on the SoC or packed into one big, energy-intensive IP. Optimizing an existing processor for your unique needs has significant advantages:

  • It allows you to save area and optimize power and performance as it targets exactly what you need.
  • It is ready to use. The processor design has already been verified – you only have to verify your custom extensions.
  • You design for differentiation. Owning the changes, you create a unique product.

Now, one may think that this is not so easy. How reliable is the verification of a custom processor? Differentiation is becoming more difficult, time-consuming, and sometimes more expensive. The success of processor customization relies on two things:

  • An open-source ISA such as RISC-V.
  • Design and verification automation.

Custom processor, ASIP, Domain-Specific Accelerator, Hardware Accelerator, Application-Specific processor… are all terms related to processor customization.

Customizing a RISC-V processor

Remember: the RISC-V Instruction Set Architecture (ISA) was created with customization in mind. If you want to create a custom processor, starting from an existing RISC-V processor is ideal.

You can add optional standard extensions and non-standard custom extensions on top of the base instruction set to tailor your processor for a given application.

RISC-V Modular Instruction Set. Source: Codasip.

For a robust customization process that ensures quality in the design and confidence in the verification, automation is key.
With Codasip you can license RISC-V processors:

  • In the usual way (RTL, testbench, SDK).
  • In the CodAL source code.

CodAL is used to design Codasip RISC-V processors and generate the SDK and HDK. You can then edit the CodAL source code to create your own custom extensions and modify other architectural features as needed.

Microsemi opted for this approach as they wanted to replace a proprietary embedded core with a RISC-V one. Check out this great processor customization use case with Codasip IP and technology!

Processor design automation with Codasip Studio

The legacy approach to adding new instructions to a core is based on manual editing. Adding custom instructions must be reflected in the following areas:

  • Software toolchain.
  • Instruction set simulator.
  • Verification environment.
  • RTL.

Design Automation without Codasip Studio. Source: Codasip.

With the software toolchain, intrinsics can be created so that the new instructions are used by the compiler, but this also means that the application code needs updating, as shown in the sketch below. In addition, the existing ISS and RTL both have to be modified by hand, which is a further source of potential errors. Lastly, if the verification environment needs changing, this is yet another area for problems. Verifying these manual changes is a big challenge and adds risk to the design project.
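
To illustrate what “the application code needs updating” means in practice, here is a sketch in C with a hypothetical saturating-add custom instruction; the intrinsic name is invented for illustration and is not a real compiler builtin.

    #include <stdint.h>

    /* Original, portable code: saturating 16-bit addition in the signal path. */
    int16_t sat_add(int16_t a, int16_t b)
    {
        int32_t sum = (int32_t)a + b;
        if (sum >  32767) return  32767;
        if (sum < -32768) return -32768;
        return (int16_t)sum;
    }

    /* Legacy flow: to use the new custom instruction, the application itself
       is edited to call an intrinsic, so the source is no longer
       target-independent and must be maintained separately. */
    int16_t sat_add_custom(int16_t a, int16_t b)
    {
        return (int16_t)__builtin_my_satadd(a, b);  /* hypothetical intrinsic */
    }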

Some vendors offer partially automated solutions, but by not covering all aspects of processor customization they still leave room for error due to the manual changes.

Design Automation with Codasip Studio. Source: Codasip.

In contrast, with Codasip the changes are made only to the CodAL source code. The LLVM toolchain is automatically generated with support for the new instructions. Similarly, the ISS and RTL are generated to include the custom instructions and can be checked using the updated UVM environment. This approach not only saves time but also results in a more robust customization process.

Create an application-specific processor with RISC-V and Codasip

As differentiation is becoming more difficult, time-consuming, and sometimes more expensive with traditional processor design, customizing a processor so that it meets your unique requirements is key. Creating an application-specific processor efficiently, without compromising PPA, requires an open-source architecture and tools to automate the design and verification process. Find out more in our white paper on “Creating Domain-Specific Processors using custom RISC-V ISA instructions”.

Roddy Urquhart

What is the difference between processor configuration and customization?


August 12, 2021

For many years, people have been talking about configuring processor IP cores, but especially with growing interest in the open RISC-V ISA, there is much more talk about customization. So, processor configuration vs. customization: what is the difference?

CPU configuration is like ordering a pizza from the pizzeria

A simple analogy is to think of ordering a pizza. With most pizzerias, you have standard bases and a choice of toppings from a limited list. You can configure the pizza to the sort of taste you would like based on the standard set of options available.

Processor IP vendors have typically offered some standard options to their customers, such as optional caches, tightly coupled memories, and on-chip debug, so that they could combine them and provide the customers with suitable configurations for their needs. While doing so, the core itself remains the same, or has very limited variations. Certainly, the instruction set, register set, and pipeline would remain the same, and only optional blocks such as caches are allowed to vary.

CPU customization is like having a chef making a special pizza for you

Today, many users are demanding greater specialization and variability in their processor cores. This may be to achieve enhanced performance while keeping down silicon area and power consumption. There are a number of ways in which this can be achieved, for example by creating custom instructions optimized for the target application, or by adding extra ports and registers. Such changes fundamentally alter the processor core itself.

Returning to the pizza analogy, customization is like having a private chef who starts from an underlying pizza base recipe but is willing not only to let you choose alternative toppings, but also to modify the base itself, with alternatives to the standard flour, oil, and yeast. That is quite a good reason to customize a processor, isn’t it?

And this is exactly what RISC-V allows you to do. You can customize an existing RISC-V processor to meet your specific requirements by adding optional standard extensions and non-standard custom extensions.

Create a custom core with RISC-V and Codasip

Although some proprietary IP suppliers allow their cores to be extended, the greatest customization opportunity lies with RISC-V. The ISA was conceived from the outset to support custom instructions. Codasip RISC-V processors were developed using the CodAL architecture description language and are readily customized using Codasip Studio. For more information on how custom instructions can be used to create domain-specific processors, download our white paper.

Roddy Urquhart

Is RISC-V the Future?


July 27, 2021

Is RISC-V the future? This is a question that we often get asked, so let’s assume that we mean ‘is RISC-V going to be the dominant ISA in the processor market?’. This is certainly an unfolding situation, and it has changed significantly in the last five years.

What is RISC-V and what does RISC-V do?

RISC-V originated at the University of California, Berkeley, in 2010 and took a number of years to gain traction with industry. A big step forward was the formation of the RISC-V Foundation in 2015 as a non-profit organization to drive the adoption of RISC-V. In early 2020, the RISC-V Foundation’s activity was re-branded and re-incorporated as the Swiss-based RISC-V International.

I remember exhibiting at Embedded World in 2017 and the Codasip stand had the RISC-V logo prominently displayed. Many visitors asked, “what is RISC-V?”, showing that awareness in Europe was low. Since then, the situation has changed dramatically with a high level of interest in all geographies.

For many years, we have tended to classify processors into silos such as MPU, MCU, GPU, APU, DSP, etc. Some devices, such as mobile phones, combine multiple types of processor cores in their designs. If we think back to, say, 2016, the MPU world was dominated by the x86 architecture, while Arm dominated MCUs and APUs (application processors), and the mobile phone ecosystem generally.

Is RISC-V better than Arm? Is RISC-V better than x86 from Intel/AMD? It is definitely different and brings new opportunities. Today we can identify a few new trends in the market that RISC-V is enabling. Let’s look at three of them.

Trend 1: RISC-V is shifting up in performance

In the early years of RISC-V, it was mainly used in academic projects. However, by 2016 a wide range of commercial companies were developing embedded microcontrollers based on the RISC-V ISA. It could be argued that this was a relatively easy step for the RISC-V community, given that embedded developers are used to building their systems from a variety of sources, including middleware delivered as source code. Embedded cores are also less complex.

RISC-V processor performance trending upwards. Source: Codasip.

What is more challenging is moving into application processors, with considerably more complexity required to support rich operating systems such as Linux or Android. In the case of mobile phone applications, there is a complex ecosystem which will take a while for RISC-V vendors to support. Nevertheless, there are plenty of other opportunities for RISC-V application processors in systems which use Linux, and there is a choice of IP cores such as Codasip’s A70 addressing mid-range performance.

Finally, we can expect more and more suppliers to create complex RISC-V cores for high-performance computing in the future.

Trend 2: RISC-V is breaking down the barriers between processor types

With semiconductor scaling failing, the boundaries between traditional processor groupings are blurring. With more and more demand for domain-specific accelerators to achieve cost-effective performance on-chip, it is increasingly necessary to tune the design to the needs of the workload.

With its minimalist base integer instruction set and its provision for custom extensions, the RISC-V ISA is an ideal starting point for creating special-purpose accelerators.

While some applications, such as mobile phones, with complex legacy software are unlikely to change architecture in the short term, others have no such constraints. New applications, such as AI (artificial intelligence), are moving to RISC-V as the open ISA with flexibility and customization. In the more distant future, RISC-V has the potential to gain even greater market share as legacy considerations cease to apply.

Trend 3: Customers want to avoid a monopoly supplier

Finally, there is a strong desire for change in the processor market. Since the 1980s, microprocessors have been dominated by the Intel/AMD x86 duopoly, while in the late 1990s Arm became the de facto standard in the mobile phone processor market. That dominance extended further into adjacent areas, including embedded.

For the last decade, I have often heard engineers talk of “Arm fatigue” and disquiet with the monopolist position and vendor lock-in in key markets. However, as long as Arm could claim ‘Swiss neutrality’ with its broad product range, nobody would be fired for licensing Arm. With the acquisition by SoftBank, that neutrality was seriously eroded, and the now-failed Nvidia merger attempt unsettled many licensees.

The free and open RISC-V ISA has seen widespread interest and is likely to be a catalyst for a sea change in the market. As an open standard, it has the potential to be relevant for decades, and with multiple suppliers offering processor cores, it avoids vendor lock-in.

RISC-V shipments predicted to grow strongly. Source: Semico Research Corporation.

While nobody expects architectures with a rich history – such as x86 or Arm – to disappear overnight, for the first time in decades designers have a viable alternative in RISC-V. With RISC-V covering a greater and greater range of performance and having a rapidly expanding ecosystem, its market share will continue to grow. This is reflected in market reports such as Semico Research’s prediction that the market will consume 62.4 billion RISC-V CPU cores by 2025.

RISC-V surely has a rapidly growing future and a great chance of being a dominant architecture.

Roddy Urquhart

Domain-Specific Accelerators


May 21, 2021

Semiconductor scaling has fundamentally changed

For about fifty years, IC designers have been relying on different types of semiconductor scaling to achieve gains in performance. Best known is Moore’s Law, which predicted that the number of transistors in a given silicon area would double every two years. This was combined with Dennard scaling, which predicted that with silicon geometries and supply voltages shrinking, the power density would remain the same from generation to generation, meaning that power would remain proportional to silicon area. Combining these effects, the industry became used to processor performance per watt doubling approximately every 18 months. With successively smaller geometries, designers could use similar processor architectures but rely on more transistors and higher clock frequencies to deliver improved performance.
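
As a reminder of why power density stayed constant under Dennard scaling, the classical argument can be restated as follows (a textbook-style sketch, using a scaling factor κ > 1 per process generation, not specific to any product):

    \begin{align*}
    &\text{Scaling: } L \to \tfrac{L}{\kappa},\quad V \to \tfrac{V}{\kappa},\quad C \to \tfrac{C}{\kappa},\quad f \to \kappa f,\quad \text{density} \to \kappa^2 \cdot \text{density} \\
    &\text{Power per transistor: } P \propto C V^2 f \;\to\; \tfrac{C}{\kappa}\cdot\tfrac{V^2}{\kappa^2}\cdot\kappa f = \tfrac{C V^2 f}{\kappa^2} \\
    &\text{Power density: } \kappa^2 \cdot \tfrac{C V^2 f}{\kappa^2} = C V^2 f \quad\text{(unchanged from one generation to the next)}
    \end{align*}

Once supply voltage could no longer scale with geometry, the V²/κ² term stopped shrinking and power density began to rise – which is exactly the breakdown described below.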

48 Years of Microprocessor Trend Data. Source: K. Rupp.

Since about 2005, we have seen the breakdown of these predictions. Firstly, Dennard scaling ended as leakage current, rather than transistor switching, became the dominant component of chip power consumption. Increased power consumption means that a chip is at risk of thermal runaway. This has also led to maximum clock frequencies levelling off over the last decade.

Secondly, the improvements in transistor density have fallen short of Moore’s Law. It has been estimated that by 2019, actual improvements were 15× lower than predicted by Moore in 1975. Additionally, Moore predicted that improvements in transistor density would come at roughly constant cost. This part of his prediction has been contradicted by the exponential increases in the cost of building wafer fabs for newer geometries. It has been estimated that only Intel, Samsung, and TSMC can afford to manufacture in the next generation of process nodes.

Change in design is inevitable

With the old certainties of scaling silicon geometries gone forever, the industry is already changing. As shown in the chart above, the number of cores has been increasing and complex SoCs, such as mobile phone processors, will combine application processors, GPUs, DSPs, and microcontrollers in different subsystems.

However, in a post-Dennard, post-Moore world, further processor specialization will be needed to achieve performance improvements. Emerging applications such as artificial intelligence demand heavy computational performance that cannot be met by conventional architectures. The good news is that energy efficiency scales much better for a fixed task, or a limited range of tasks, than for a wide range of tasks. This inevitably leads to creating special-purpose, domain-specific accelerators – and that is a great opportunity for the industry.

What is a domain-specific accelerator?

A domain-specific accelerator (DSA) is a processor or set of processors that are optimized to perform a narrow range of computations. They are tailored to meet the needs of the algorithms required for their domain. For example, for audio processing, a processor might have a set of instructions to optimally implement algorithms for echo-cancelling. In another example, an AI accelerator might have an array of elements including multiply-accumulate functionality in order to efficiently undertake matrix operations.

Accelerators should also match their wordlength to the needs of their domain. The optimal wordlength might not match the common ones (such as 32 or 64 bits) found in general-purpose cores, and standard formats such as IEEE 754 floating point may be overkill in a domain-specific accelerator.
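
To make the wordlength point concrete, here is a sketch in C of the kind of kernel a matrix-multiply accelerator is built around: narrow 8-bit operands accumulated into a wider integer, with no need for IEEE 754 floating point. The fixed 64×64 dimensions are purely illustrative.

    #include <stdint.h>

    /* Inner kernel of an int8 matrix multiplication: each output element is a
       sum of 8-bit products accumulated in 32 bits. A domain-specific
       accelerator would map this onto an array of multiply-accumulate units. */
    void matmul_int8(const int8_t a[64][64], const int8_t b[64][64],
                     int32_t c[64][64])
    {
        for (int i = 0; i < 64; i++) {
            for (int j = 0; j < 64; j++) {
                int32_t acc = 0;
                for (int k = 0; k < 64; k++) {
                    acc += (int32_t)a[i][k] * b[k][j];  /* 8-bit MAC into 32-bit */
                }
                c[i][j] = acc;
            }
        }
    }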

Also, accelerators can vary considerably in their specialization. While some domain-specific cores may be similar to, or derived from, an existing embedded core, others might have limited programmability and sit closer to hardwired logic. More specialized cores will be more efficient in terms of silicon area and power consumption.

With many and varied DSAs, the challenge will be how to define them efficiently and cost-effectively.

Roddy Urquhart

What is an ASIP?


April 16, 2021

ASIP stands for “application-specific instruction-set processor” and simply means a processor which has been designed to be optimal for a particular application or domain.

General-purpose vs. domain-specific or application-specific processors

Most processor cores to date have been general-purpose, which means that they have been designed to handle a wide range of applications with good average performance. If you have a special computationally intensive algorithm, such as audio processing, this may mean you need a high-performance core (for example, one with a SIMD unit or zero-overhead loops) or a high clock frequency to meet your needs – and that may result in exceeding your silicon or power budget.

An alternative is to create an ASIP which has a specialized architecture, optimized to efficiently achieve the required performance for the audio processing. The ASIP would not usually be designed to optimally handle more generic operations such as those needed for an operating system. Instead, if an OS is needed, you would probably run it on a separate general-purpose core which would not be constrained by the need to run audio processing algorithms. Thus, the ASIP design is optimized for performance with just enough flexibility to meet its use case.

ASIPs have been the subject of research in universities around the world and have been applied to a number of domains such as audio signal processing, image sensors, and baseband signal processing. Codasip founder and President Karel Masařík, together with CTO Zdeněk Přikryl, researched design automation for ASIPs at TU Brno. To read more about their research, check the papers listed at the end of this article.

ASIPs to address semiconductor scaling issues

ASIPs or domain-specific processors are likely to be used more widely in the future due to semiconductor scaling issues. For decades, developers of SoCs have relied on Moore’s Law and Dennard scaling to get more and more performance and circuit density through successively finer silicon geometries.

While this scaling applied, it was generally reasonable to use general-purpose processor cores and to rely on new generations of silicon technology to deliver the performance required with an acceptable power consumption. However, this scaling has now broken down, with smaller increments in performance improvement and leakage current worsening power consumption. Because of this, the semiconductor industry must change, and a new approach to processing is needed.

To date, this industry challenge has been tackled by incorporating different types of general-purpose cores on a single SoC. For example, mobile phone SoCs have combined application processors, GPUs, DSPs, and MCUs, but none of these classes of processor have application-specific instruction sets.

With new products demanding new algorithms for artificial intelligence, advanced graphics and advanced security, more specialized hardware accelerators are needed and are being developed. Such accelerators are designed to handle computationally demanding algorithms efficiently. Each accelerator will need an optimized instruction set and microarchitecture – in other words, an ASIP.

Domain-specific accelerators. Source: Codasip.

ASIP and software development

Domain-specific processors (Application-specific processors, ASIP… you will notice we use these terms interchangeably) combine hardware specialization with flexibility through software programmability.

One challenge of creating custom hardware is ensuring that the needs of software developers are met too. Techniques such as intrinsic instructions allow direct access to the instruction set from C, but they reduce the flexibility of the code. They may still be a viable answer when complex functions are encapsulated in a single instruction.

Better still is a top-down approach, in which a compiler created for your domain-specific accelerator automatically targets the custom instructions you create. This, along with an instruction set simulator and profiler, accelerates the rate at which you can try different design iterations to converge on an optimal solution.
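
Here is a small sketch in C of what that top-down approach enables, assuming the generated compiler includes a pattern for a custom saturation instruction (how such a pattern is expressed is toolchain-specific and not shown): the source stays plain, portable C, and the ISA-aware compiler can map the clamp onto a single custom instruction without intrinsics or source changes.

    #include <stdint.h>

    /* Plain, portable C: clamp a 32-bit value into the signed 16-bit range.
       An ISA-aware compiler can recognize this pattern and emit one custom
       saturation instruction; a generic compiler emits compares and branches. */
    static inline int16_t clamp_s16(int32_t x)
    {
        if (x >  32767) x =  32767;
        if (x < -32768) x = -32768;
        return (int16_t)x;
    }

    /* The same source compiles for a general-purpose core or for the ASIP;
       only the generated compiler back-end differs. */
    void scale_and_clamp(const int32_t *in, int16_t *out, int n, int32_t gain)
    {
        for (int i = 0; i < n; i++) {
            out[i] = clamp_s16((in[i] * gain) >> 8);  /* fixed-point scaling */
        }
    }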

Codasip Studio provides a complete solution for this process, with the ability to generate the compiler and the rest of the SDK from a simple instruction-level model.

Design automation and verification of application-specific processors

With the need to develop many and varied accelerators, it is not efficient to develop the instruction sets and microarchitectures manually. Application software can be analyzed by profiling, and the instruction set tuned to those needs. A processor design automation toolset like Codasip Studio can then be used to generate a software toolchain and an instruction set simulator.

Specifically, Codasip Studio automates the generation of a C/C++ compiler that is fully aware of the instruction set and can infer specific instructions automatically. It is important to have the C/C++ compiler in the loop to see the impact of the application-specific instructions. The same applies to the instruction set simulator, debugger, profiler, and other tools in the SDK, which are all automatically generated. Codasip Studio can also be used to generate the hardware design, including RTL, testbenches, and a UVM environment.

Did you know?

Codasip was originally founded to create co-design tools for ASIPs, hence the name “Co-dASIP”. Codasip Studio has subsequently been used for creating more general-purpose cores such as RISC-V embedded and application cores too.

Processor design does not end with generating the RTL code – the dominant part of the design cycle is verification and it needs to be rigorous. As Philippe Luc explained to Semiconductor Engineering, RTL verification is multi-layered and complex.

Codasip Studio can automate key parts of the verification process in order to make the full ASIP design flow quicker. Among other things, Codasip Studio provides automated coverage points for the optimized instructions. The provided UVM environment makes it easy to run programs (including the new instructions) on both the model and the RTL, with result comparison. A constrained random program generator is provided to help close coverage and ensure quicker verification for our customers.

To learn more about the power of Codasip Studio combined with RISC-V, read our white paper on “Creating Domain-Specific Processors with custom RISC-V ISA instructions”.

Codasip research papers on design automation for ASIP

  • MASAŘÍK Karel. UML in design of ASIP. IFAC Proceedings Volumes, 39(17):209-214, September 2006.
  • ZACHARIÁŠOVÁ Marcela, PŘIKRYL Zdeněk, HRUŠKA Tomáš and KOTÁSEK Zdeněk. Automated Functional Verification of Application Specific Instruction-set Processors. IFIP Advances in Information and Communication Technology, vol. 4, no. 403, pp. 128-138. ISSN 1868-4238.
  • PŘIKRYL Zdeněk. Fast Simulation of Pipeline in ASIP simulators. In: 15th International Workshop on Microprocessor Test and Verification. Austin: IEEE Computer Society, 2014, pp. 1-6. ISBN 978-0-7695-4000-9.
  • HUSÁR Adam, PŘIKRYL Zdeněk, DOLÍHAL Luděk, MASAŘÍK Karel and HRUŠKA Tomáš. ASIP Design with Automatic C/C++ Compiler Generation. Haifa, 2013.

Roddy Urquhart

What does RISC-V stand for?


March 17, 2021

RISC-V stands for ‘reduced instruction set computer (RISC) five’. The number five refers to the number of generations of RISC architecture developed at the University of California, Berkeley, since 1981. It is pronounced “risk-five”, and you might sometimes see it written “RISC five” or “R5”.

The RISC concept (like the parallel MIPS development at Stanford University) was motivated by the fact that most processor instructions were not used by most computer programs. Thus, unnecessary decoding logic was being included in processor designs, consuming extra power and silicon area. The alternative was to simplify the instruction set and to invest more in register resources.

The RISC I project implemented a mere 31 instructions, each 32 bits wide, but 78 registers. It introduced the notion of register windowing, a technique later adopted by the SPARC architecture. It was closely followed by the RISC II project, which had an even larger register file (138 registers). RISC II also introduced 16-bit instructions, which improved code density. The terms RISC III and RISC IV have been used to refer to the SOAR and SPUR projects of 1984 and 1988, respectively.

The RISC-V project was partly motivated by the fact that proprietary, closed ISAs were restricted by patents and license agreements. Krste Asanović of the University of California, Berkeley, was convinced that there were great advantages in having a free, open ISA that could be applied to both academic and industrial projects. David Patterson, who had worked on the earlier RISC projects, was also involved. In 2010, a three-month project produced a new instruction set, leading to the publication of the first RISC-V ISA specification in 2011.

In order to manage contributions from other parties, the RISC-V Foundation – now RISC-V International – was formed in 2015 as an open collaborative community, and the original developers of the ISA transferred their rights around RISC-V to this organization. Since then, RISC-V International has managed further development of the RISC-V ISA.

Roddy Urquhart
