CadenceLIVE Silicon Valley – OnDemand

3D-IC / Chiplets

3D-IC Hierarchical Flow Using Cadence Integrity 3D-IC Platform and Innovus

Heterogeneous integration, the concept of using advanced packaging and silicon implementation techniques to effectively stack multiple silicon dies, is becoming ever more complicated. Engineers across multiple electronics design disciplines must weigh many different tradeoffs, from cost and performance to thermal footprint and signal integrity, at earlier and earlier stages in the silicon and system design process. Using the Cadence Integrity 3D-IC Platform and Integrity System Planner enables a cross-domain approach to solving these complex problems across design teams. The Cadence Integrity 3D-IC Platform is a 3D-capable superset of the Innovus Implementation System with expanded links to multiple system-level signoff tools and flows, so it can naturally be applied to Broadcom’s hierarchical ASIC flow, which needs to consume repeated dies or multiply instantiated IP within a given die.

In this work, we will share Broadcom’s approach to implementing complex 3D chip structures using the Integrity 3D-IC Platform. This approach not only reduces the time to complete the design implementation but also reduces the required manpower. The session also presents the enhancements made to achieve custom routes between dies while meeting high-speed signal integrity requirements.

Suvarna Bharti, Broadcom

A Systematic Approach to 3D Cutline Exploration and Benchmarking

As wafer cost continues to increase at a rapid pace, there is a growing demand to convert more of our 2D SoCs into 3D system-in-package designs. Furthermore, as individual IPs get larger and more complex, we see a need to disaggregate these designs along arbitrary boundaries, or “cutlines,” rather than along standard fabric interfaces as has been done in the past. This results in large numbers of high-speed ad hoc interfaces on the die boundaries and creates a need for cross-die optimization techniques. Silicon architects and floorplanners need robust and intuitive methods to rapidly create and assess different configurations in the early planning phase of the design so that they can deliver the best mix of performance, power, area, and cost for the product. This paper presents these construction and analysis techniques on two different designs: a low-power crypto core that explores several cutlines, and a high-speed compute module that explores different bump pitch and floorplan options. We present exhaustive studies and KPIs that can support cutline decisions, including 2D/3D PPA comparison, 3D IR/thermal plots, 2D vs. 3D QoR (e.g., buffer/inverter count and routing length), D2D bump-to-flop distance monitoring, D2D timing path analysis, and 2D vs. 3D metal layer usage.
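One of the KPIs above, D2D bump-to-flop distance monitoring, can be illustrated with a small sketch. This is a hypothetical example, not Intel's or Cadence's actual tooling: given planned (x, y) locations for each die-to-die bump and its associated boundary flop, it reports Manhattan distances and flags nets that exceed a planning budget (the net names, coordinates, and 20 µm budget are invented for illustration).

```python
# Hypothetical D2D bump-to-flop distance KPI: report Manhattan distance
# per net so outliers can be flagged during early cutline planning.
def bump_to_flop_distances(bumps, flops):
    """bumps/flops: dicts mapping net name -> (x, y) in microns."""
    report = {}
    for net, (bx, by) in bumps.items():
        fx, fy = flops[net]
        report[net] = abs(bx - fx) + abs(by - fy)  # Manhattan distance
    return report

# Invented example data for two die-to-die nets
bumps = {"d2d_tx[0]": (10.0, 20.0), "d2d_tx[1]": (15.0, 22.0)}
flops = {"d2d_tx[0]": (12.0, 25.0), "d2d_tx[1]": (40.0, 22.0)}
dist = bump_to_flop_distances(bumps, flops)

# flag nets whose bump-to-flop distance exceeds an assumed 20 um budget
violations = [net for net, d in dist.items() if d > 20.0]
```

A real flow would pull the coordinates from the floorplan database rather than hand-written dicts, but the KPI itself is this simple per-net distance check.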

Vivek Rajan, Intel

Advanced Rapid Prototyping and Validation Methodology in Three-Dimensional Integrated Circuits (3DIC)

3D integrated circuits have emerged as a new architectural solution to address the design challenges that limit the ability of two-dimensional ICs to exceed certain PPA thresholds. Although there have been significant industry breakthroughs and rapid research progress in the past decade, the widespread commercial adoption of 3DIC design has been limited. To achieve broad adoption of 3DIC, it is necessary for designers and teams across various disciplines to carefully consider cost and manufacturing complexity while establishing leading-edge EDA tool flow methodologies. In this presentation, we address these challenges by outlining Cadence's approach to a rapid prototyping methodology for 3DIC design. This flow requires early-stage design collaterals such as micro-architectural chip diagrams, system interface bus protocols, third-party IPs like HBM, and targeted technology collaterals from the foundry. In the second phase of the rapid prototyping flow, we explore the validation process for 3DIC design to analyze and sign off on these complex devices, including EM/IR analysis among chiplets and system-level thermal analysis when all chiplets are assembled into an advanced package and placed on a PCB.

The flow starts with Cadence's SoC Architectural Information (SAI) framework, which generates inputs for monolithic 3D designs that are integrated as individual chiplets. Designers can generate connectivity in SAI based on the microarchitecture diagram and facilitate early planning for Through-Silicon Vias (TSVs), interposer-based stacking, and 3D Early Floorplan Synthesis. Once the design is modeled according to the microarchitecture specifications, we utilize the SAI-generated design to assess implementation feasibility through 3D system-level EM/IR and thermal analysis. This enables the exploration and analysis of multiple design scenarios by altering design parameters such as 2D/3D connectivity, logic and cell count, TSV count, etc., either globally or locally.

In conclusion, we present real-world outcomes of how this 3DIC rapid prototyping methodology has provided Meta with valuable insights with a short turnaround time to verify the robustness of TSV planning. This presentation should be beneficial for implementation engineers looking to evaluate, design and validate cutting-edge 3DIC structures.

Nitish Natu, Meta
Young Gwon, Cadence

Chiplets Cooking Up a Storm: How to Assure System Performance in Today's Fast-Changing Arm SoC World?

Arm recently announced the Neoverse N3 Platform, a high-performance compute sub-system (CSS) targeted at chiplet-based multi-die compute clusters across the High-Performance Compute and Infrastructure domains. This paper introduces the key components of the subsystem, the performance considerations when integrating memory and I/O IP into the system, and the analysis methodologies and tools that help achieve performant systems.

The presentation will reference the Arm Performance Cookbook, a guide for modern Arm SoC performance optimization, co-authored by Arm and Cadence, which is being launched at the CDNLive! SJ event. Examples from the book are used to illustrate how to ensure best-in-class performance is achieved during IP integration using Cadence System VIP tools.

It will introduce the Neoverse N3 core and the topology of the system, and highlight key performance features of the Arm Coherent Mesh Network IP, the critical hub of the subsystem where third-party IP will be integrated. It will show how a Rapid Adoption Kit from Cadence can help customers quickly get to grips with the performance characteristics of the IP from both a non-coherent and a coherent perspective, and how Cadence System Performance Analyzer can give key insights into DDR memory controller integration and performance optimization.

Overall, this presentation and the referenced book aim to demystify the selection, configuration and path to performance for Arm based SoC development.

Colin Osborne, Arm
Nick Heaton, Cadence

Expanding Innovation for 3D-IC Design Through Research Partnerships and Standards

In this session, Cadence R&D will present the latest developments in the Integrity 3D-IC Platform along with research work conducted with IMEC related to 3D integration, PPA analysis, and die stacking beyond two tiers. The latest updates on industry-standard formats will be presented as well, along with a brief peek into what’s coming next for Cadence’s unified 3D-IC platform solution.

CT Kao, Cadence

Multi-Chip Integration Using Cadence Integrity 3D-IC Platform with TSMC 3Dblox

Today, 3D chiplet-based designs are becoming more complicated, needing to consider all the heterogeneous components, from chip and chiplet to interposer, required for CoWoS and SoIC assembly even before each individual die’s size and design information are known. Using the standardized 3Dblox language and data format, the complex die-stacking structure and connectivity can be defined in a systematic way. Leveraging the Cadence Integrity 3D-IC Platform’s native support for data conversion between Cadence iHDB and 3Dblox import/export, we can easily integrate all design data from each die and execute multi-die stacking verification.

In this work, we will share GUC’s experience with a Wafer-on-Wafer design done using the Integrity 3D-IC Platform for system-level floorplanning and bump planning, while using iHDB and 3Dblox to maintain per-die design databases, libraries, and technology files. This session will also show how 3Dblox works in multi-chip concurrent verification/analysis flows, such as 3D-stack DRC/LVS, cross-die RC extraction and STA, and 3D-IR analysis, to avoid error-prone die-to-die manual mapping.

Yen Chiu, Global Unichip

Samsung System level Memory IP and EDA Solution with Cadence’s Unified 3D-IC Platform for the Multi-Die Era

With the industry looking for innovative high-memory-bandwidth solutions in the AI era, efficient multi-die design with high-speed DRAMs on advanced packages is in high demand among hyperscaler and data center customers. In this session, Samsung Memory will present its unique memory-aware STCO platform built in collaboration with Cadence’s Integrity 3D-IC Platform. The combined solution covers all stages of the design from planning to system-level signoff, with different Cadence technology pulled in at each stage for different verification checks. This collaboration offers unique value with optimal design flows, tools, and methodologies built and verified for high-performance designs needing state-of-the-art EDA and memory technology for multi-die designs.

Jinwon Kim, Samsung

Advanced Multi-Die Packaging

Designing High-Performance Sensor Packages Ensuring Optimized Performance

Sensors are increasingly being integrated into compact and portable devices. Designing small, lightweight sensors with minimal footprint while maintaining performance specifications can be challenging.

In this presentation, we are going to showcase how the Dassault Systèmes 3DEXPERIENCE platform provides a comprehensive, integrated environment for product design and development, connecting Cadence Allegro Package Designer users with Solidworks users to seamlessly work on sensor design alongside other aspects of the product, such as mechanical and electronics design. Both can quickly iterate on sensor designs to optimize performance and meet design requirements. They can explore different design alternatives, analyze the impact of design changes, and make informed decisions to achieve the desired outcomes.

Dinesh Panneerselvam, Dassault Systèmes
Steve Durrill, Cadence

Delivering the Full Benefits of Maskless Lithography and Adaptive Patterning with Comprehensive Design Automation

Producing advanced packaging design files ready for manufacturing involves various design tools with different workflows. While the EDA industry offers automated workflows, manual data imports and processing steps are still quite common, which can increase time to market and the margin for error. Moreover, when designing for Adaptive Patterning with maskless lithography, a full panel or wafer layout is required, not just a single package. Fully digital exposure without a mask offers the incredible ability to quickly adjust designs based on assembly and fabrication results, allowing an iterative and rapid feedback cycle. However, automation beyond traditional EDA is required to truly take advantage of this capability.

This presentation will explore Deca’s comprehensive design automation system, called AP Studio, which delivers fast, reliable, and unified flows from layout to production ready artifacts. This system interfaces flawlessly with Cadence APD+ and SiP layout tools, exports to a standardized format, performs layout post-processing, generates wafer and panel layouts, prepares the layout for Adaptive Patterning, and produces recipe setup data for assembly and fabrication equipment such as chip attach, die measurement system (DMS), automated optical inspection (AOI), and laser direct imaging (LDI). Once through the design flow, the layout is simulated with AP Engine for verification. Unique aspects of the flow will be presented, including how per-wafer OCR and 2D codes and per-package 2D codes can be imaged directly into metal layers. This extensive design automation not only solves longstanding challenges, but also empowers designers with the full benefits of maskless lithography.

Gaurang Gunde, Deca Technologies

Meeting Future Performance Demands Through Packaging

This presentation will describe an advanced packaging methodology specifically targeted at multi-die packages. It will also show existing designs and then show the advantages of the new method, including improvements in SI/PI as well as in design complexity.

Jeff Cain, Chipletz

Accelerating Design Advantages Through the Integrated Design Ecosystem

Today’s semiconductor technology roadmaps comprise complex performance requirements that are driving advanced packaging trends, yet present unique package design challenges. Frontline chiplet and heterogeneous integration developments are emerging to push technology boundaries and are elevating demand for innovative design flows and circuit-level simulations to accelerate complex design achievements. ASE has partnered with several EDA companies to help enable the Integrated Design Ecosystem™ (IDE) to address design challenges around its VIPack platform structures and extensively improve both design efficiency and quality, in parallel with shortening time to market for customers. Cadence is a key EDA partner whose Allegro X Design Platform is one of the key EDA tools foundational to the VIPack platform.

This design flow, including Cadence’s Allegro X, allows a seamless transition from single die SoC to multi-die disaggregated IP blocks including chiplets and memory for integration using 2.5D or advanced fanout structures. This flow enables new design efficiencies and sets new standards for quality and user experience. The integration of critical package design tool capabilities into ASE’s workflow has resulted in significant cycle time reduction while lowering customer costs.

Enhanced features of IDE include cross platform interaction encompassing layout and verification, advanced RDL and silicon interposer auto routing with embedded design rule checking (DRC), and Package Design Kit (PDK) implementation in the design workflow.

Mark Gerber, ASE

In-Design Electrical Analysis: Design Layout with Intention vs. Post-Design Modeling/Rework

The traditional roles of the layout designer and the electrical engineer are being reassessed as advances in package design software expand to support more functionality within the layout editor application. Designers are being tasked with more electrical considerations and trade-offs earlier in the design creation phase to support an improved electrical SI/PI final signoff verification. Electrical engineers are now asked to define the SI/PI architecture up front and then delegate the iterative electrical tuning process to the layout designer. While this might appear to create an overlap of electrical analysis, the discipline of each role is differentiated by the type of deliverable.

Initially, the electrical design process was thought to be achieved through a best-practice style of trace routing followed by an electrical engineer's modeling and feedback, resulting in numerous redesign cycles. However, with the need for advanced SI/PI package performance, the focus has turned to the necessity of intentional routing strategies, which help reduce the impact of redesign rework on the design layout, thereby reducing the overall number of detailed SI/PI performance signoff-style reviews.

With today's emphasis on creating high-performance and cost-effective advanced package design solutions, the designer and the electrical engineer will both benefit from the electrical modeling guidance provided by Cadence's In-Design Analysis (IDA) software tools. The simplified electrical modeling workflows now available to the designer have enabled the electrical engineer to focus more on defining future electrical IDA architecture configurations and enhance the trusted verification modeling signoff process.

Jonathan Micksch, Amkor Technology, Inc.

AI-Generated Constraint Methodology for PCB and IC Package Design Teams

From the industry’s first implementation of a constraint-driven design flow in the mid-1990s to the present day, the demand for integration between design and analysis tools with automation has grown significantly because of the need for seamless collaboration and data exchange during different design stages. Ultimately, realizing tight integration and complete automation would enhance productivity and efficiency.

The major challenges to the desired integration between design and analysis tools can be listed as:

1. Design and analysis tools operate independently.

2. The users of design and analysis tools have different engineering training backgrounds and technical skills.

3. The information exchange between designers and SI engineers is not efficient, sometimes even difficult.

As a result, communication of design change requirements between designers and SI engineers still follows paper trails or depends on spreadsheets. This has become a significant roadblock to reducing design cycles for today's complex designs.

This presentation demonstrates the industry’s first constraint integration flow with AI-based optimization between Allegro, the leader in constraint-driven design, and Topology Workbench, a powerful system-level analysis tool.

To achieve this goal, the flow addresses the challenges through the following innovations.

1. Though Allegro and Topology Workbench are different environments, a path is built to link Allegro’s Constraint Manager with simulation results in Topology Workbench.

2. A generalized GUI form is introduced that reads in preferred/optimized design parameters generated by simulation, sweeping, or optimization and presents them to layout designers.

3. By mapping the rules in Constraint Manager and the simulated parameters from Topology Workbench, the new flow creates design rule sets in Constraint Manager input format.

4. Users can choose to save the rule set for future designs or to update the constraints in an existing design, and to check and apply the rule set in Constraint Manager.
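Step 3 above, mapping simulated parameters onto Constraint Manager rules, could look conceptually like the following sketch. All names, values, and the rule format are hypothetical illustrations, not the actual Allegro or Topology Workbench APIs:

```python
# Hypothetical sketch: translate optimized parameters from a simulation
# sweep into constraint-manager-style rules via an explicit mapping table.
def make_rule_set(sim_params, mapping):
    """sim_params: optimized values from simulation/sweeping/optimization.
    mapping: sim parameter name -> (constraint rule name, unit)."""
    rules = {}
    for sim_name, value in sim_params.items():
        constraint, unit = mapping[sim_name]
        rules[constraint] = f"{value}{unit}"  # e.g. "12.5mm"
    return rules

# Invented example: two swept parameters and their target constraints
sim_params = {"trace_len_max": 12.5, "diff_pair_gap": 0.1}
mapping = {
    "trace_len_max": ("MAX_TOTAL_ETCH_LENGTH", "mm"),
    "diff_pair_gap": ("DIFFP_PRIMARY_GAP", "mm"),
}
rules = make_rule_set(sim_params, mapping)
```

The resulting rule set can then either be saved for reuse on future designs or applied to update the constraints of an existing design, as described in step 4.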

Michael Catrambone, Cadence
Brad Griffin, Cadence

EMIB Based Advanced Packaging Flow for Multi-Die Heterogeneous Integration

Heterogeneous device integration has become a critical capability for the next generation of AI, data center, graphics, and FPGA devices. Advanced packaging technologies such as EMIB (embedded multi-die interconnect bridge), Foveros, and Co-EMIB are enabling a dramatic increase in interconnect density and bandwidth, allowing the integration of multiple types of heterogeneous die (logic, memory, transceiver, and multiple process nodes) onto a single package. In addition to dense die-to-die interconnect, there is a significant need for advances in design flows and methodologies, power delivery, high-speed I/O interconnects, and thermal cooling technologies to enable the next generation of heterogeneous devices. This presentation covers the unique tool and flow challenges related to system-level planning and assembly that come with localized high-density interconnect between two or more die on an organic package connected with an EMIB. The flow also covers EMIB die and package physical implementation driven by the Integrity 3D-IC Platform. Finally, we discuss the electrical and signoff methodologies used to complete the system-level analysis.

Zain Ali, Intel
Jags Jayachandran, Cadence

Automotive

Addressing Tomorrow's Sensor Fusion Needs in Automotive Compute with Cadence

Automotive architectures continue to evolve, with the current focus on software-defined vehicles and the demand for single SoCs and a common software architecture for central compute, zonal architectures, and the digital cockpit. Automotive vendors are also designing for feet-off, then hands-off, and eventually eyes-off operation. These trends are driving the need for cars to have a large number of sensors, as well as different types of sensors such as camera, radar, and lidar. Driver and occupancy monitoring systems and privacy concerns are also driving the need for different cameras and radar sensors inside the car. In the past, specialized and discrete processors were used to handle image, radar, and lidar data independently. However, these requirements have since evolved, and the need for a single, unified processor has become apparent. Aside from the classical approaches, AI is now also becoming prevalent for radar and sensor fusion applications. In this presentation, we plan to cover Cadence's answer to such requirements.

Pulin Desai, Cadence

Empowering Infotainment Systems with the Latest DSP advancements

As infotainment systems evolve, they must handle diverse workloads, including audio, voice, speech, AI, and vision. This necessitates a flexible and scalable Digital Signal Processor (DSP) solution that can serve as an offload engine for the main application processor. Using a single DSP for various workloads presents a cost-effective solution for high-performance, low-power processing, aligning perfectly with Electric Vehicle (EV) requirements.

The latest DSP introduces several features that significantly enhance performance. A double-precision floating-point unit (FPU) improves audio performance and signal-to-noise ratio (SNR) by leveraging 64-bit processing over 32-bit, enabling the management of more than 32 speakers without performance degradation. Improved AI capabilities can perform engine/road noise reduction, shape the audio output to highlight voice and dialogue, and adapt the cabin soundfield, contributing to a quieter and more enjoyable driving experience.
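The SNR benefit of a 64-bit datapath over a 32-bit one can be seen in a generic numerical sketch (plain Python, unrelated to any Cadence DSP): accumulating many audio-like samples in 32-bit floating point drifts from the exact result far more than a 64-bit accumulator does, because each 32-bit addition rounds away low-order bits.

```python
# Generic illustration of 32-bit vs 64-bit accumulation error.
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest IEEE 754 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

step, n = 0.1, 100_000
acc64 = 0.0  # 64-bit accumulator (native Python float)
acc32 = 0.0  # simulated 32-bit accumulator (rounded after every add)
for _ in range(n):
    acc64 += step
    acc32 = to_f32(acc32 + to_f32(step))

exact = step * n
err64 = abs(acc64 - exact)
err32 = abs(acc32 - exact)
# the 32-bit accumulator drifts far more than the 64-bit one,
# which is the rounding noise that degrades SNR in long audio pipelines
```

The same effect applies to long filter and mixing chains in audio processing, which is why wider accumulation improves SNR.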

These advancements position the latest DSP as a powerful tool in the evolution of infotainment systems, delivering superior performance and versatility.

Casey Ng, Cadence

Functional Safety Solution

As state-of-the-art electronics propel the automotive industry into a future of more connectivity and autonomy, the development of safety-compliant semiconductors is critical. In this session, Cadence will present our solution for FMEDA-driven functional safety verification. The session will show how a comprehensive FMEDA, driving a fault campaign executed with a multi-engine approach to fault execution, is essential for automotive semiconductor providers to demonstrate compliance with the various ASIL metrics specified by ISO 26262.

Attendees will take away how they can give safety compliance assessors the necessary confidence that safety goals are met, while attaining maximum reuse of the existing functional verification environment and minimizing the time and effort needed for functional safety verification.

Pete Hardee, Cadence

Jumpstarting the Automotive Chiplet Ecosystem with Arm's Latest Automotive Enhanced IP Portfolio and Cadence Interface IP

In this presentation, Arm and Cadence present how our collaboratively designed chiplet reference platform for ADAS systems will jumpstart collaboration and innovation in the automotive ecosystem. Our collaboration is founded on the confluence of two seemingly contradictory industry trends: the move toward high-performance centralized computation to enable the software-defined vehicle, and the disaggregation of computation into discrete chiplets that can be independently verified and certified. Centralization is driven by the increased need for efficiency and fuel economy, while autonomy and electrification drive the need for high performance. Simultaneously, the industry needs to maintain and increase safety certification and product quality, which are often in conflict with aggregated complexity. A multi-chiplet architecture including both high-performance compute chiplets and lower-complexity safety-certified chiplets achieves both goals while enabling members of the ecosystem to innovate through custom accelerator and application-specific chiplets.

The recently announced Automotive Enhanced IP portfolio from Arm provides both the high-performance and the safety-certified cores necessary to give this design a firm foundation. Cadence interface IP provides the backbone for certifiable interfaces between chiplets, and configurable Cadence Tensilica AI accelerators provide a starting point for innovation. With the goal of presenting each chiplet in the design both as a fully enabled, tapeout-ready example and as a reference design ready for customization in Cadence design tools, the ADAS system serves as a jumping-off point for innovation within the ecosystem. This flexibility enables ecosystem members to select some chiplets as early representations of products to purchase from a silicon supplier while focusing on other chiplets as the basis for their own innovation. Additionally, the reference implementations will be available for interoperability testing without the need to adopt full production licensing.

Johannes Bauer, Arm
Junie Um, Cadence
Ross Dickson, Cadence

Shifting Left Automotive Chiplet Designs with Virtual Platforms and SOAFEE Software Architectures

Every major automotive OEM is exploring how to efficiently enable silicon-based product differentiation in the coming world of software-defined vehicles. Many have determined that the most efficient solution is to adopt a chiplet-based architecture on the silicon side and a portable yet deeply embedded software architecture for the software, such as the SOAFEE community is demonstrating. Each architecture offers a range of options for efficient optimization and innovation while facing a key challenge: both require someone else to build the rest of the system first. In an industry with a strong commitment to safety, finding a first mover can be a challenge. Cadence and Arm have committed to untying this Gordian knot to enable progress in the industry. As already announced, Arm and Cadence are developing a multi-chiplet reference solution for ADAS based on Arm's latest generation of Automotive Enhanced IP and Cadence interface IP, accelerated with Cadence Tensilica configurable AI accelerators. Additionally, Cadence and Arm are producing Helium Virtual and Hybrid Platform models of this design to enable customers to begin experimenting with and extending the platform well before silicon has been manufactured.

This is more than just the usual “use a virtual platform to shift left,” though that is certainly part of the value. The creation of Helium-based Hybrid Platform references enables customers to quickly begin verification of their own chiplet designs in a full system context. On the software side, the silicon architecture has been designed to be fully SOAFEE compliant, so SOAFEE participants can now have an example automotive-side platform for testing in addition to the server-side platform already made popular by several vendors. Cadence will work to coordinate the enablement of SOAFEE-compliant operating systems from our partners to ensure ease of adoption across the ecosystem.

Ross Dickson, Arm
Johannes Bauer, Arm

Cloud

Cadence Hybrid Cloud

In this presentation, we will explore how Cadence utilized NetApp® FlexCache® to manage and optimize both on-premises and cloud storage effectively. Specifically, we will discuss how we are leveraging a FlexCache solution as a part of the storage infrastructure to implement effective job placement across on-premises and cloud regions to improve the agility and elasticity of our computing infrastructure.

Michael Johnson, NetApp
Suresh Pachiyappan, Cadence

Cadence Managed Cloud for Cost Efficient and Productive Chip Design

Join us for an informative session as we unveil the capabilities of our cloud solutions designed to revolutionize EDA workloads. Whether you require completely hosted environments or need peak/burst capacity, our cloud solutions offer unparalleled flexibility and efficiency.

We will discuss how Cadence Managed Cloud can optimize cost-efficiency and productivity for your chip design projects. Learn how to harness the full potential of cloud technology to streamline your EDA workflows and achieve remarkable results. Discover how to kickstart your journey with a 30-day free trial.

Kushal Koolwal, Cadence

Cadence Pegasus System on AWS Cloud ParallelCluster

AWS has opened the door for silicon designers to design and verify their designs in the cloud by providing instant access to thousands of CPUs that are self-managed for cost optimization (AWS ParallelCluster).

Pegasus TrueCloud lets users launch physical verification jobs in the cloud without having to copy any design or foundry IP to the cloud. This innovation alone addresses concerns related to IP security within the EDA industry and opens up an almost unlimited amount of compute resources otherwise not available.

Moreover, Pegasus Console offers a unique way of monitoring distributed Pegasus jobs in the cloud; it lets designers update, analyze, and debug Pegasus jobs. And Pegasus FlexCompute contributes to 20-40% higher CPU utilization and achieves the fastest TAT: there is no need for the designer to predict how many CPUs are needed for the best runtime.

Pedro Gil, AWS
Dibyendu Goswami, Cadence

Improve Resiliency and Reduce Costs for Long-Running EDA Jobs

Two different challenges come together in this session: how to prevent long-running jobs from failing (server failures, race conditions) and how to reduce the cost of infrastructure, allowing engineers access to more compute nodes to run in parallel.

We will present a lab using Cadence Innovus with the ability to sustain disruptions and resume work, but the solution applies to all Cadence tools.

Bernie Wu, MemVerge
Allan Carter, AWS

Micron's Cloud Journey: Architecting a Hybrid Model for Peak Demands

As design cycles compress and on-premises infrastructure faces capacity limitations and lengthy lead times, Micron has embraced innovative solutions to optimize our operations. Micron has adopted a hybrid cloud strategy that enables us to run a significantly higher number of simulations within a short duration, which has improved our time to market, enabled us to share our on-premises compute resources more effectively, and allowed us to utilize our EDA licenses more efficiently. Our strategy of using the cloud in burst mode allows us to expand compute capacity on demand, particularly during peak periods, while efficiently scaling down during non-peak times. This presentation will summarize how different infrastructure tools and technologies have been integrated to enable burst capacity for running EDA simulations, mainly verification jobs, in the cloud, allowing better utilization of EDA licenses and improved sharing of on-premises compute resources. We will look at the opportunities the cloud created for us, the challenges, and some enhancements that will help improve our experience.

Parikshit Karnik, Micron Technology
David Bukchin, Micron Technology

The Technology Foundation for Cadence Managed Cloud

Discover the next generation of chip design tools with Cadence's EDA-as-a-Service, powered by AWS and NetApp. Enjoy a seamless cloud-based design experience with industry-leading data management tools designed to meet the demands of global development organizations by delivering the right data to the right place at the right time. Learn how only AWS with FSx for NetApp ONTAP enhances the developer experience by improving productivity and eliminating unnecessary wait times in EDA workflows. Not only does the AWS and NetApp solution offer compelling value-add features, but customers also experience reduced non-recurring engineering (NRE) costs, with more breathing room in the release schedule to accelerate or innovate.

Paul Mantey, NetApp

Custom/Analog Analysis

Application of Cadence Spectre Fast Monte Carlo in Timing Variation Analysis

Many modern designs exhibit profound statistical variation as a result of underlying device manufacturing process effects, and design teams use Monte Carlo simulations to capture key parameters of the true distribution. For comprehensive design space exploration, teams must run a huge number of Monte Carlo simulations to reach the confidence level needed to ensure all prominent variations are accounted for. The turnaround time for such a large number of jobs can adversely impact tape-out schedules, so there is a strong need to improve throughput while still meeting statistical accuracy requirements. Broadcom has been working with Cadence for over a year on applying Cadence AI-driven Spectre Fast Monte Carlo (FMC) analysis to Broadcom's timing-related accuracy analysis project, with good results and meaningful productivity gains. Spectre's multiple-processor mode further speeds up runtimes without any loss of accuracy. This presentation will cover the background and problem statement of the project and the accuracy and performance data from using Spectre FMC, followed by a summary and conclusions.
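The sample-count/confidence tradeoff described above can be sketched with a generic, tool-agnostic Monte Carlo loop. The normal delay model, its parameters, and the function names below are illustrative assumptions for this writeup, not Broadcom's circuits or Spectre's algorithm:

```cpp
#include <cmath>
#include <cstddef>
#include <random>

// Toy stand-in for a timing quantity under process variation:
// delay = nominal * (1 + sigma * z), z ~ N(0, 1). Purely illustrative.
double sample_delay(std::mt19937& rng, double nominal, double sigma) {
    std::normal_distribution<double> z(0.0, 1.0);
    return nominal * (1.0 + sigma * z(rng));
}

struct McResult {
    double mean;   // estimated mean delay
    double ci95;   // half-width of the 95% confidence interval
};

// Run n Monte Carlo samples and report the mean delay and its 95%
// confidence half-width. The half-width shrinks as 1/sqrt(n), so a 10x
// tighter interval costs 100x more samples -- the throughput pressure
// that motivates fast Monte Carlo techniques.
McResult run_mc(std::size_t n, double nominal, double sigma) {
    std::mt19937 rng(42);  // fixed seed for reproducibility
    double sum = 0.0, sumsq = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double d = sample_delay(rng, nominal, sigma);
        sum += d;
        sumsq += d * d;
    }
    const double mean = sum / n;
    const double var = (sumsq - n * mean * mean) / (n - 1);
    return {mean, 1.96 * std::sqrt(var / n)};
}
```

Running `run_mc` with 2,000 versus 200,000 samples shows roughly a 10x tighter confidence interval for the larger run at 100x the simulation cost, which is why accelerated Monte Carlo methods pay off.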

Robert Chen, Broadcom

Can AI/ML and Analog Design Really Get Along?

In our daily lives, we have seen an explosion of AI-based applications designed to make our lives easier.  Can the same be said for analog design?  Custom design has always been about the blending of the "technical" and the "artistic" to get the job done. So how could AI/ML possibly do well with the nuances required in such design?  Spoiler alert: It can't.  But it can prove to be a very valuable assistant in areas that can benefit from the power of quick but massive computation and when AI has an embedded sense of what has come before, so it can make intelligent decisions about the necessary outcome. In this presentation, learn how Cadence is already enabling AI/ML in our analog tools, and with an eye towards the future, how Cadence sees AI being used to make us all better custom engineers and artists to develop a brighter future.

Roland Ruehl, Cadence

Design Shift-Left: Enabling Early SerDes Mixed-Signal Design Beyond 200 Gb/s

High-speed mixed-signal systems, such as 200+ Gb/s SerDes, use both analog and digital processing subsystems, often with complex interactions between them. The evolution of manufacturing processes and ever-increasing performance requirements for these systems drive increased mixed-signal algorithmic complexity for calibration and impairment mitigation. More complex analog/digital interactions lead to longer design cycles and a higher probability of design issues found late in the development process.

In this presentation, we will demonstrate how to left-shift the validation effort, that is, advance it earlier in the workflow, by leveraging MATLAB/Simulink architectural models to generate behavioral models for mixed-signal IC simulation. Using models at different levels of abstraction, we will show how to anticipate and isolate the impact of different types of impairments, providing deeper insights into your design choices.

We will present a methodology and a practical example where automatic model generation capabilities can be used to left-shift a subset of the circuit design earlier in the development process. Starting from a SerDes system architectural model, we will use the behavioral model of an interleaved analog-to-digital converter (ADC) to generate a SystemVerilog module, which we use instead of the actual circuit as the basis to design the ADC calibration scheme.

With the proposed approach, the design of the calibration system no longer depends on the finalized ADC IC design. Instead, the calibration algorithm and the ADC circuit may be designed in parallel, leveraging behavioral models to anticipate the interactions between the two systems working together. This workflow can be generalized to accelerate mixed-signal system design by allowing interacting subsystems to be simulated and validated in parallel.

Kerry Schutz, MathWorks
David Halupka, SeriaLink Systems

Dynamic Transistor-Level Electrothermal Simulation to Evaluate On-Die Heat Propagation for Optimal Thermal Stability

In semiconductor circuit simulation, thermal changes and their effect on circuit performance are usually simulated at a global level that affects all circuits on the die equally. While device self-heating can be enabled, this still does not account for heating from nearby devices with high power dissipation. Most tools or methods that model package heating due to power dissipation lack the ability to dynamically change the electrical simulation, on the fly, in response to the changing thermal environment. Electrothermal transistor-level co-simulation closes the gaps left by the existing, separate solutions. The Legato Reliability Solution, powered by the Celsius Thermal Solver, was chosen to model on-die temperature rise and heat spreading during electrical simulation and feed these differential temperature effects back into the circuit. This tool allowed us to create a methodology to evaluate thermal movement on the die as the aggressor heat source changed and to develop an architecture that met the circuit stability requirements in this dynamic thermal environment.

Jason Christianson, Marvell Technology

NVIDIA GPU-Accelerated SPICE Simulation via the Cadence Spectre X Solution

The Spectre X GPU SPICE simulation flow is a novel solution that leverages the power of NVIDIA GPUs to accelerate analog circuit simulation. This flow enables faster verification of complex, post-layout designs, such as those found in SerDes, PLL, and ADC/DAC circuits. In this presentation, we will discuss how NVIDIA and Cadence collaborated to develop and test the Spectre X GPU SPICE simulation flow using both the command line and the Cadence Analog Design Environment (ADE). We will show how ADE provides compatibility and flexibility for NVIDIA custom flows, delivering a seamless user experience for launching simulations and analyzing their data. We will also demonstrate the performance gains achieved with the Spectre X GPU SPICE simulation flow, which can deliver up to a 5X speedup over traditional CPU-based simulators using only a single V100 GPU. By adopting this flow, users can benefit from faster design sign-off, increased design coverage, and overall higher quality for their analog designs.

George Kokai, NVIDIA Corporation

Spectre FX: Engineering a Faster and More Predictable Future for Full-Chip SPICE Analysis

Traditional full-chip SPICE (FCS) simulation methods mainly target the power-up and reset sequence category, owing to enormous SPICE netlist sizes and time-consuming simulations. This limits the FCS setup's ability to run functional tests that are otherwise critical for uncovering electrical issues, such as high IDD, especially in low-power mixed-signal SoCs.

This study presents a comprehensive evaluation of the Cadence Spectre FX simulator in the context of System-on-Chip (SoC) full-chip SPICE simulation, emphasizing its advantages over similar simulator tools. Spectre FX, known for its advanced FastSPICE algorithms, provides a significant leap in performance and accuracy, particularly in handling the complex SoC designs that are prevalent in modern semiconductor devices.

In our evaluation, Spectre FX's capabilities are thoroughly examined, highlighting innovative features such as intelligent partitioning. These features collectively contribute to its expedited simulation times and enhanced precision, proving crucial for the intricate dynamics of full-chip SPICE SoC simulations.

Furthermore, the application of Cadence Spectre FX in our SoC full chip simulation demonstrates a marked improvement in verification efficiency and design cycle times. The simulator's superior handling of high-frequency, high-precision components, integrated within our SoC architecture, underscores its pivotal role in advancing the state of SoC design and verification practices.

On designs with more than 60 million gates, Spectre FX has proven quite efficient (more than a 5X improvement over a competitor tool) in running power-up and low-power checks. Additionally, its measurements of current consumption in low-power modes have been accurate, with final results very close to the data sheet specification.

This comparative analysis showcases Cadence Spectre FX's strength in facilitating more accurate, reliable, and efficient full-chip SoC simulations, thereby offering substantial advantages for semiconductor design and verification teams striving to meet the escalating demands of next-generation SoC development.

Jay Madiraju, NXP Semiconductors India Pvt Ltd

Custom/Analog Design

AI/ML-Leveraged Analog Design Migration to GlobalFoundries FDX and FinFET Technologies: Enabling a Path for Technology-Agnostic Design Migration

This presentation is an overview of Cadence Virtuoso automated analog design migration between two different GlobalFoundries technologies. We demonstrate schematic migration of a silicon-proven OPAMP circuit test case from an FDSOI technology to a FinFET technology, including design optimization after migration. We show that this approach reduces the time required to migrate designs compared with a full redesign, and we review issues encountered during the migration. In addition, we will discuss progress on layout migration.

Jignesh Patel, GlobalFoundries

Analog Design Migration Using Virtuoso and ML/AI based Advanced Optimization Platform

As semiconductor manufacturing processes become more refined, the difficulty of designing for fine processes increases. Various flows have been proposed as a result, and the time they require, including simulation, is also increasing. In this situation, reusing existing designs has become important for faster design closure with minimum effort. We present the results of an analog schematic migration flow, including schematic porting, optimization, and verification, through the Cadence Virtuoso Studio platform. The entire flow has been validated by experimental results on Samsung Foundry FinFET technology.

Kihoon Kim, Samsung Foundry

Comparative Analysis of Standard Cell and Device Router Engines in Virtuoso Studio for Custom Analog Physical Design Efficiency

Custom analog and ASIC designs have different design objectives in the physical design stage, which leads to distinct characteristics in the development of automated tools. Cadence's Virtuoso Studio supports three modes for the routers: device, standard cell, and chip assembly, each utilizing a dedicated P&R engine within an automated flow.

This paper compares and analyzes routing performance (speed/QoR) between the standard cell P&R engine and the device-level router for the physical P&R design of custom analog designs. By identifying the optimal design methodology based on specific design characteristics, the efficiency of the physical design process can be significantly improved.

Yongjin Lee, Intel
Jennifer Kan, Intel

Custom Silicon Photonics Design Flow for GF FOTONIX™

A custom electronic-photonic design flow has been developed, covering a new photonic device design using the GF PDK and ways to generate custom models for use in photonic integrated circuits. The subsequent physical verification steps, i.e., DRC, LVS, etc., have been done with the PDK-recommended tools. A new device, a DBR (Distributed Bragg Reflector), which is not part of the PDK library, has been created for maximum efficiency in Ansys Lumerical, and a model of it has been generated with the help of the CML compiler. Simulation intricacies and layout and model generation procedures have been developed and put together to make the GF FOTONIX™ PDK flexible enough to allow customers to design their own devices using existing GF PDK features, supported software tools, and recommended methodologies.

Rais Huda, GlobalFoundries
Ramya Srinivasan, GlobalFoundries

Increasing Reliability in Automotive Project Top Cell with Design Intent and EAD

The reliability of automotive products is of critical importance to NXP and is therefore a main concern during the design and layout phases of its circuits. As a result, the possibility of coupling the High Current and Max Voltage Drop functionalities of Design Intent with the in-layout EM and IR drop checks of EAD (Electrically Aware Design) represented a productivity-enhancing opportunity for NXP. This presentation covers the work carried out in a real-case scenario of an automotive project within NXP, the substantial benefit found in bringing the two features together, and the strategies put in place to enable this methodology.

First, a general presentation of the custom analog flow of our automotive product line will be provided, with an overview of the challenges faced during tape-out phase to guarantee the reliability of NXP products and the motivations that led to adopting the methodology presented in the paper.

The schematic design aspect will then be covered, explaining how Design Intent was used to include hundreds of High Current and Max Voltage Drop intents at the top cell level. We will point out the importance of systematic verification of critical nets in our custom analog flow and how designers entered this information with Design Intent. The cooperation between NXP and Cadence to provide a SKILL-based coding solution allowing quick loading and updating of a massive number of intents will also feature in this section.

In the following section, the layout construction will be detailed, highlighting how the data entered in the schematic was loaded into the layout environment and helped drive the top cell physical implementation. The section will accentuate the benefit of having the High Current intent to calculate the EM and IR drop checks of critical nets, as well as the Max Voltage Drop intent to check on the fly whether any violation exists. An example of the EAD browser clearly indicating the violations and helping the layout team modify their implementation will be presented. In addition, we will provide an overview of methods for modifying ICT files in order to extract the layout interconnect metals under the specific conditions required by the automotive industry.

To conclude, this presentation will emphasize the positive impact that the use of both Design Intent and EAD had on the latest project tape-out, the increased yield it guaranteed, and the plan for future usage within NXP. It will also highlight the collaboration between Cadence and NXP to enhance the work methodology, allowing the implementation challenges found to be tackled.

Leonardo Konrad, NXP Semiconductors

Innovative and Intelligent Debugging, Optimization, and Signoff Closure of Custom/Analog Designs

This abstract introduces an intelligent debugging and optimization solution tightly integrated into Virtuoso Studio, aiming to shorten design debugging and closure runtimes. The solution leverages advanced algorithms with an interactive GUI to automate error detection and optimize design performance. By seamlessly integrating with Virtuoso Studio, it provides designers with real-time insights and suggestions, accelerating the debugging process and enhancing overall design efficiency. This approach promises to streamline the design workflow and improve productivity in semiconductor design projects.

Louis Tanguay, Cadence
Hitendra Divecha, Cadence

Custom/RF Design

In-House EDA Tool to AWR Migration

Higher losses, reduced power handling, and the inability to support wider bandwidths limit the practical use of SAW and BAW filter technology in the 3–6 GHz range. Fortunately, new acoustic wave filter solutions based on laterally excited BAW (LBAW) devices overcome these limitations above 3 GHz to address 5G FR1 band applications. Resonant, a subsidiary of Murata Manufacturing Co., Ltd., has developed its XBAR® IP portfolio, based on its LBAW technology, from the ground up to meet the needs of current and future wireless communication requirements with best-in-class bandwidth and rejection. Resonant develops cutting-edge XBAR technology used to design RF filters and modules, making it possible to design user equipment for next-generation networks. Resonant's core mission is to solve challenging problems through fundamental analysis and innovation, combining expertise in filter design, acoustic research, and high-performance computing to create products and intellectual property. In this talk, Resonant presents how they have adopted a design flow based on the AWR Design Environment platform to design their XBAR BAW filters, supporting their team of engineers in creating device models, PDKs, and layout footprints for RF design and manufacturing. Furthermore, the design team has recently adopted in-design analysis with thermal setups in Celsius and Microwave Office, performing thermal analysis based on power dissipation simulations from Microwave Office's harmonic balance analysis and optimizing their filter designs.

Kaixing Li, Resonant

Modeling Technique for Wiring Stack Above Active Devices for Millimeter Wave Integrated Circuit Design

In millimeter-wave (mmWave) integrated circuit (IC) design, accurate modeling of layout parasitics is important for reliable simulation of complex die structures such as the wiring metal stack of a multi-finger FET. This paper presents a unique modeling technique for a grounded coplanar waveguide (CPW) transistor feed network using the Cadence EMX planar EM simulator, in which a port is allocated on each FET finger of the IC design. The validity of our proposed technique is confirmed by comparing the results with Cadence QRC layout parasitic extraction on a W-band low noise amplifier designed in Cadence Microwave Office, with in-situ EM/circuit analysis performed by EMX. Differences in the simulated gain and noise figure are within 1 dB and 0.24 dB, respectively, across 81–86 GHz. The mmWave design demonstrates that accurate simulation of circuit performance is achieved, design iterations are reduced, and design resources are saved.

Daniel Mejia, MaXentric Technologies, LLC

RF Design Migration with Virtuoso Studio

RF design is becoming more critical across all industries, from mobile communications to industrial, consumer, and aerospace. As the world gets more connected, higher bandwidth and increased sensing requirements are driving the importance of RF/high-frequency design. The trend of moving to ever-newer nodes has long been present in digital and analog design but is becoming popular for RF design as well. Finally, the shortage of experienced RF designers is driving the need for more automation and migration in the RF space. This presentation outlines a front-to-back RF migration flow that includes schematic mapping, circuit optimization with parasitic scaling, passive device synthesis, in-design electromagnetic (EM) simulation, and assisted layout migration. We demonstrate the steps used to perform the schematic and layout migration of a 2.4GHz low noise amplifier (LNA) from a TSMC 16nm to a TSMC 6nm process using Virtuoso Studio. The presentation concludes by reporting the results of the migration and highlighting future collaborative work.

Rachid Salilk, TSMC
Wilbur Luo, Cadence

Samsung Foundry 14RF RFIC Virtuoso Studio Reference Flow for RFIC/Package Co-Design with Proven 48GHz Design

The surge in communications systems relying on millimeter-wave (mmWave) frequencies has greatly increased the demand for efficient, optimized, and proven RFIC solutions. Previously, an RF/mmWave design flow might depend on multiple EDA tools provided by multiple vendors. Such a flow is not optimal for getting market-critical designs completed efficiently and on time. In addition, mmWave designs demand a system-level focus whereby the package and board environment must be considered from the design outset. A large portion of the RF design community requires a proven, foundry-provided RFIC system-level reference flow. In this paper we will present a proven, seamless, single-vendor Samsung Foundry 14nm RF mmWave reference flow based on Virtuoso Studio, including system-level budgeting, IC implementation, and sign-off with electromagnetic, reliability, and EM-IR analysis for mmWave IC building blocks such as the power amplifier (PA), low noise amplifier (LNA), and RF switch. Also included are tightly integrated system budgeting tools for initial system-level planning and post-layout verification, proven on a 48 GHz RFIC tapeout.

Samsung Foundry and Cadence continue to lead the industry in offering complete RF design solutions through close collaboration on optimizing process node performance, PDK efficiency, and EDA tool integration. These flows offer advanced methodology and unique performance in addressing RF design challenges such as predictability and design closure, leading to higher-quality RFICs.

Kihoon Lee, Samsung Foundry
KB Lee, Cadence

State of the Art Heterogeneous Integrated Packaging RF (SHIP RF)

The SHIP-RF program, sponsored by the Department of Defense (DoD), aims to foster cutting-edge microelectronics design and manufacturing expertise and leadership for defense applications as well as for commercial clients that require design and manufacturing of next-generation RF technology. In partnership with Cadence, Qorvo’s Design Center team has developed services, tools and Assembly Design Kits (ADKs) to enable Qorvo and its customers to achieve the advanced design and manufacturing success for next-generation RF technology. This talk provides an update on the status of Cadence ADK solutions into Qorvo’s SHIP-RF Assembly and Test Center (ATC) manufacturing rules that will allow comprehensive modeling and product simulation for crucial aerospace and defense applications.

Spencer Pace, Qorvo

System Budget to System Realisation: A 5G mmWave Beamformer Perspective on a 22FDX Process

The availability of advanced-node silicon ICs for RF front ends and highly integrated SiP technologies is enabling mmWave phased array systems for commercial applications. This talk explores recent developments in design, analysis, and implementation workflows supported by EM/thermal analysis, RF circuit/antenna co-simulation, and phased array synthesis to address silicon-to-antenna co-design. A comprehensive top-down system design methodology is presented and demonstrated with a front-end module (FEM)/antenna-in-package (AiP) design for 5G mobile applications targeting 24GHz-29GHz. Starting with a system simulation to perform link budget analysis of the FEM/AiP architecture and define individual block specifications, this talk then presents the design/simulation details of the front-end IC, based on GlobalFoundries' 22nm FDSOI process, in Cadence Virtuoso, followed by the design and analysis of the advanced package antenna/RF feed network, with early floorplanning performed in the Cadence® AWR Design Environment® platform and Cadence Clarity and ported to Cadence Allegro® Package Design Plus (APD+) for IC integration and full chip/package verification. The FEM shows excellent correlation between simulations and measurements, both on wafer and packaged, and is further characterized post-silicon by applying actual 5G signals in a real-time measurement that replicates the excitations used in the system simulation environment.

Andy Heinig, Fraunhofer IIS/GlobalFoundries

Digital Design 1

Accelerating Time to Market with an AI-Driven, PPA-Optimized Full-Flow

Achieving the ambitious time-to-market (TTM), power, performance, and area (PPA), and cost goals of a chip design project requires a methodology that not only intelligently explores the solution space but also seamlessly integrates architectural exploration, synthesis, and implementation. In this presentation, we extend the Cadence Cerebrus Intelligent Chip Explorer artificial intelligence (AI)-driven methodology for chip design to cover the whole design flow from MathWorks MATLAB to GDSII. The complexity of the methodology is encapsulated by Cerebrus, which runs autonomously to explore the solution space at each design stage. Generative AI and machine learning (ML) models efficiently direct the flow, which spans Cadence Stratus High-Level Synthesis, Cadence Genus Synthesis Solution, Cadence Innovus Implementation Solution, and xReplay. PPA metrics are collected and reported by Cerebrus at each stage of the flow using the Cadence Joules RTL Power Solution, Cadence Quantus Extraction Solution, Cadence Tempus Timing Solution, and Cadence Voltus IC Power Integrity Solution. PPA is optimized based on these metrics, and several PPA-optimized implementations are provided by Cerebrus at the end of the flow. Based on the design goals, the user can choose between results optimized for different metrics, including power, timing, and area.

To illustrate the flow, multiple architecture variants of a 500K-instance, parallel, high-speed fast Fourier transform (FFT) design are implemented and verified in MATLAB and translated to SystemC using MathWorks HDL Coder. The integrated Cerebrus environment automates the creation of PPA-optimized GDSII from the SystemC code. The full-flow automation allows exploration of multiple architectural variants defined at the MATLAB stage, which have the highest impact on PPA. Cerebrus AI-assisted PPA optimization at all levels, from algorithm to physical implementation, improves design productivity and reduces turnaround time (TAT) from months to weeks or even days with minimal user intervention.

Michael Bruennert, Cadence
Tu Doan, Cadence

Automated Design Tools for a Superconducting Logic Family

Recently, superconducting digital circuits have emerged as a promising technology in the post-Moore's-law paradigm. Utilizing Josephson junctions as the active switching element, superconducting digital circuits rely on the quantization of magnetic flux (fluxons) to encode binary information. Reciprocal Quantum Logic (RQL) is a leading superconducting logic family that has demonstrated fast, ultralow-power operation with wide operating margins. Furthermore, it has shown promising scaling properties, producing some of the largest, most complex superconducting digital circuits to date.

While previous demonstrations of scaling have relied on custom design and physical layout of circuits, further scaling of superconducting digital circuits has been hindered by the lack of electronic design automation (EDA) tools. EDA tools have been essential to the realization and maturation of very-large-scale integration (VLSI) of CMOS circuits. However, superconducting digital circuits introduce novel challenges that make off-the-shelf use of standard EDA tools incompatible with superconducting designs.

This paper introduces digital design using RQL and explores some of the difficulties encountered during synthesis, timing, and place and route of RQL circuits. It also describes how these issues have led to novel solutions and features within the Cadence suite of EDA tools to realize digital superconducting circuits. With these advances in EDA tools, RQL technology is poised to achieve orders of magnitude of scaling and realize complex superconducting digital circuits for fast, ultralow-power applications.

Michael Vesely, Northrop Grumman

Benefits of HLS for DoD ASIC Development

Once conceptualized as futuristic, HLS (high-level synthesis) tools are now mainstream for ASIC and FPGA design in the development of commercial applications, relegating hand-coded RTL (register transfer level) methods such as Verilog and VHDL to the past. Our paper details our experience using HLS to design an ASIC with a highly scalable, high-complexity data path design, comparing this with efforts to replicate the same design using Verilog and VHDL. We will provide area, power, and development time metrics for both flows, supporting our conclusion that HLS tools surpass RTL hand coding in virtually all cases. The perception that HLS tools are too risky to adopt because of their novelty and abstraction of design control is overcome by dramatic improvements in development efficiency, and this gap will grow with the anticipated acceleration from using generative AI technologies in tandem with HLS.

Kirk Ober, Cadence

Compile Time Computation of Constants for High Level Synthesis

Signal processing algorithms often consist of an offline and an online component. The offline component involves computing fixed parameters that help define the algorithm, while the online component is the part of the algorithm that performs the actual computation at runtime. Typically, the offline parameters for a digital design are computed using a tool like MATLAB or Python, and the results are then manually copied into the design source code. Advancements in generalized constant expressions and template metaprogramming introduced in recent revisions of the C++ standard can eliminate the need for offline processing by embedding these calculations in the design's SystemC source code, where they can be computed at compile time. This presentation will discuss some C++ coding techniques that enhance the configurability and usability of C++ algorithms for high-level synthesis, with a focus on signal processing.
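As a minimal sketch of the technique described above (the specific window function and helper names are our illustrative assumptions, not the presenter's code), a coefficient table can be generated at compile time with `constexpr`, removing the offline MATLAB/Python step:

```cpp
#include <array>
#include <cstddef>

// Sketch: compile-time generation of a Hann-window coefficient table.
// std::cos is not constexpr in current standard C++, so a small
// range-reduced Taylor series stands in for it.
constexpr double kPi = 3.14159265358979323846;

constexpr double ccos(double x) {
    while (x >  kPi) x -= 2.0 * kPi;   // reduce into [-pi, pi]
    while (x < -kPi) x += 2.0 * kPi;
    double term = 1.0, sum = 1.0;
    for (int n = 1; n < 12; ++n) {     // cos x = sum of (-1)^n x^(2n) / (2n)!
        term *= -x * x / ((2.0 * n - 1.0) * (2.0 * n));
        sum += term;
    }
    return sum;
}

// Hann window: w[i] = 0.5 * (1 - cos(2*pi*i / (N - 1)))
template <std::size_t N>
constexpr std::array<double, N> hann_window() {
    std::array<double, N> w{};
    for (std::size_t i = 0; i < N; ++i)
        w[i] = 0.5 * (1.0 - ccos(2.0 * kPi * i / (N - 1)));
    return w;
}

// Evaluated entirely at compile time; synthesis sees only constants.
constexpr auto kWindow = hann_window<16>();
static_assert(kWindow[0] < 1e-12, "Hann endpoints are ~0");
```

Because `kWindow` is `constexpr`, the table is baked into the source with no runtime cost, and resizing the window is a one-line template-argument change rather than a regenerate-and-paste cycle.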

Evan Albright, Skyworks Inc.

End-to-End Cadence Deployment and Alignment in Ultra-High-Performance, Mixed-Signal DDRPHY IP

As memory bandwidth requirements increase dramatically for the latest use cases, such as generative AI, automotive, and compute products, the demand to deliver ultra-high-performance DDR memory at low power is growing at an unprecedented pace.

Complete alignment of front-end and back-end implementation flows is critical to meeting aggressive product demands and maintaining industry-leading power, performance, and area. Using the Cadence Genus iSpatial tool flow, the Qualcomm DDRPHY IP team was able to left-shift critical analysis, perform advanced power optimizations, and catch critical implementation bugs weeks ahead of the typical schedule.

The DDRPHY is unlike similar digital-dominated IPs: it is dominated by analog and mixed-signal logic with multi-GHz clock frequencies. With tens of thousands of pre-placed standard cells, the goal of the synthesis flow is to maximize alignment with the back end while achieving the best PPA in the synthesized logic that is not manually placed and routed by the PD team. By enabling the iSpatial flow, we can correlate the placement, routing, and optimization of the ultra-high-performance logic in the design.

We also enable the Cadence Joules xReplay flow and Cadence Cerebrus machine learning flows to enable dynamic power optimization, automatic clock-gating insertion on paths without timing bottlenecks, and machine-learning driven optimization and recipe selection. This enables a major speed-up of recipe search space optimization, as well as automated clean-up of power bugs from the RTL.

The goal of this session is to demonstrate how the DDRPHY team aligns the front-end and back-end implementation flows, highlight the power of Cadence tool-flow deployment, and outline the time saved through left-shift of key bottleneck analysis in synthesis and machine learning.

Bryan Parmenter, Qualcomm

GenAI-Driven Digital Full Flow Innovation and Roadmap Delivering Faster Design Closure

As design teams implement and sign off the most challenging designs across a wide range of foundry process nodes, the AI-driven Cadence digital full flow is continually improving to deliver the best power and performance results for ever-larger and more complex systems on chip (SoCs). In this session we will discuss new digital full flow technology, including the latest 2nm features, compute architecture support, Cerebrus AI-driven design optimization, and the JedAI Large Language Model (LLM) copilot. Join this session for a deep dive into the latest Cadence R&D digital full flow innovation.

Rod Metcalfe, Cadence

Network-on-Chip Innovations to Optimize SoC Design with Genus and Innovus

The complexities in semiconductor and electronics development have escalated even further than expected in recent years, and as a result, System-on-Chip and System-of-Chiplet architects face challenges in balancing the characteristics of memory, computing, accelerators, and connectivity. For architects, it is critical to utilize fast and accurate simulations to make early architectural decisions, especially in the context of Network-on-Chip (NoC) architectures. To avoid long iterations, the effects of layout and of semiconductor-technology-dependent digital implementation aspects need to be considered as early as possible. For that, tools like Cadence Genus can better predict implementation flow outcomes, improving decision-making in chip architecture.

This presentation will discuss Arteris' efforts to make network-on-chip development physically aware, optimizing NoC topology for specific implementation requirements, automating the RTL generation process and delivering early placement guidance to digital implementation flows. This development helps address timing issues found after place and route (P&R) processes that typically require topology adjustments or pipeline registers. By abstracting technology characteristics, such as gate and wire delay, the new approach aims to avoid such iterative loops.

We will discuss automation opportunities in P&R for NoCs, emphasizing the importance of reliable early estimates on timing to reduce overall project schedules. Genus will be highlighted for its effectiveness in predicting implementation PPA (Power, Performance, and Area) more accurately, thus enabling faster development of implementable NoC configurations. The integration of Genus with Innovus for P&R, sharing the same engines, offers significant productivity boosts, providing close estimates for area, timing, and power, and enabling early detection and resolution of placement issues.

Frank Schirrmeister, Arteris

Digital Design 2

Automated Functional ECO Using Physical Aware Conformal ECO

Late-stage functional ECOs pose special challenges: a re-spin of the design is not feasible without schedule impact, and the designs are mostly timing- and routing-converged. Manual implementation of ECOs is often the chosen approach, as it is targeted and minimally disruptive, but it requires design knowledge and is often error-prone, iterative, and time-consuming. Post-mask, metal-only ECOs add further complexity, requiring physically aware gate-array and spare-cell selection.

Partnering with Cadence on physically aware Conformal ECO (CECO) has reduced ECO cycle time from weeks to days. Patch optimization recipes have helped further reduce the CECO patch size to come close to a manual ECO. We thus gained the advantages of automation without compromising on ECO patch size.

This presentation covers both the pre-mask and post-mask capabilities of Conformal ECO and the usage of patch optimization recipes. ECO size and other key metrics, such as runtimes, timing, and routing, are compared between the manual and CECO approaches. Best practices for successful implementation of the patch in the physical design tool (Innovus) are highlighted. The following case studies are demonstrated: ECO cell placement, clock net handling, scan (SE/SI) connections, and tie-cell optimization strategies. Post-mask ECO features are explored for both gate-array and spare-cell selection.

Sindhuja Sridharan, Marvell Technology

Challenges in Datacenters: Search for Advanced Power Management Mechanisms

Various power management methods aim to customize frequency and voltage to actual compute needs while minimizing power consumption. The problem is compounded by the fact that the applied voltage must also include guard-bands to mitigate worst-case environment, workload and aging scenarios that could degrade performance in the field.

AVS is a method that attempts to reduce supply voltage while meeting system performance requirements by determining the actual required voltage for a given scenario. However, the sensors and on-chip structures that the best-known AVS techniques rely on to gauge this voltage cannot measure it directly and accurately, and thus some voltage guard-bands still need to be included. While this ensures device reliability is maintained, it limits the voltage reduction.
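For context on why trimming these guard-bands pays off, recall the standard CMOS dynamic power relation (a generic textbook relation, not specific to AVS Pro):

```latex
P_{\text{dyn}} \approx C_{\text{eff}}\, V_{dd}^{2}\, f
```

Because dynamic power scales with the square of supply voltage, even a 5% reduction in guard-band voltage cuts dynamic power by roughly 10% (since $1 - 0.95^{2} \approx 0.0975$), with leakage power falling even more steeply with voltage.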

AVS Pro is based on a novel approach: monitoring the margin to timing failure of millions of real paths inside SoCs, under functional workloads and environmental conditions, and in the face of latent defects, IR drops, and aging. This allows the voltage to be reduced to the lowest possible point, adapted to the conditions and workload the device is running under, while maintaining fault-free functionality. Since it is a hardware closed-loop solution, it enables fast readjustment of voltage if and when a worst-case scenario arises that demands higher voltage, essentially providing an inherent safety-net™.

The talk will review the Cadence implementation workflow for integrating proteanTecs AVS Pro technology, in use today at leading semiconductor companies. Leveraging Cadence tools, we employ novel methods to implement proprietary, complex IP. As a provider of both soft and hard IPs that are inserted into an existing functional design, proteanTecs streamlines the user's integration process through the use of Cadence solutions.

Ziv Paz, proteanTecs

Rapid IR Closure with Voltus InsightAI

IR drop is becoming a major issue on advanced nodes, with designers seeing an unmanageable number of IR violations at signoff. In this presentation, Cadence will show how to address this IR drop closure issue with the Voltus InsightAI In-Design solution. Voltus InsightAI automatically prevents and fixes IR drop violations very efficiently, with minimal impact on PPA. We will share details on flow setup and results on user testcases.

Rajat Chaudhry, Cadence

Success with Compare_Recipe Technology for Abort TAT Reduction

Qualcomm has challenging design implementations which frequently result in Aborts where additional intervention is necessary.

Existing abort-resolution methods involve re-running the analysis for each run. The solutions are imprecise because only the methods and run time are specified.

Conformal's new Compare_recipe technology solves for an optimal abort recipe per module and allows specific recipes to be replayed, thus optimizing run time.

Jon Haldorson, Qualcomm

Generative AI Chip Design

Broadcom has Silicon Success on Multiple Designs Using Cadence Cerebrus Machine Learning

After a long and in-depth evaluation of Cadence Cerebrus machine learning, focused only on the floorplanning, placement, clock tree synthesis, and routing side of the chip design flow, we integrated Cerebrus Exploration and Replay into our Broadcom PnR flow. This paper focuses on how we achieved greatly improved PPA on multiple designs, with the ultimate goals of area reduction and repeatable PPA improvements throughout the design milestones.

Michael Weafer, Broadcom

Cerebrus AI-Driven Power Optimization of the High-Performance Networking Chip in N3E Node 

This abstract explores Cerebrus AI-driven power optimization of a high-performance networking chip in the TSMC N3E node. By leveraging Cerebrus, we identified and optimized key design parameters to achieve total power reduction without compromising performance. Cerebrus achieved an average total power reduction of 3% and a maximum reduction of 5% in the best scenario. The 3-5% total power reduction in the multiply instantiated block translated into a 10% reduction in the total power consumption of the full chip. We also observed that Cerebrus increased our productivity at a lower engineering cost.

Mohamed Yousuf Abdul Kalil, Juniper Networks
Daman Soni, Juniper Networks

Enhancing RTL-GDS Implementation Efficiency with Cerebrus: A Case Study on Complex SoC

Design implementation and closure in modern semiconductor design pose significant challenges due to the inverse relationship between key metrics such as power, performance, and area (PPA), especially as designs move to lower technology nodes with higher frequencies and more complex architectures. Traditional manual approaches to address these challenges are time-consuming and may not be efficient across varied architectures and usage scenarios. This paper presents a solution using Cadence's Cerebrus, an integrated machine-learning-driven flow automation mechanism, to optimize the RTL-GDS implementation flow. Through a case study on a complex SoC, we demonstrate how Cerebrus streamlines the design flow, automating tasks such as recipe generation, flow execution, and data analysis. Model-replay capabilities within Cerebrus significantly reduce iteration cycles in the physical design phase, leading to improved productivity and efficiency while meeting PPA targets.

This paper focuses on Cerebrus usage in the RTL-GDS implementation flow on various nodes of Intel projects. The ability to use model-replay with Cerebrus saved several churns of the physical design cycle, along with ECO cycles.

A. SoC: Sub-6nm Node

Phase 1: Cerebrus exploration started with 7 blocks on mid-level RTL. At the end of the exploration, we saw on average a 20-30% TNS gain along with a 10-15% leakage improvement. In some blocks we saw a 3-5% density gain.

Phase 2: On the pre-final RTL release, we explored 8 blocks, including a few new ones, and ran Cerebrus all the way to Tempus signoff. In this phase we saw large benefits in TNS and leakage, as well as density gains. These blocks also used models generated in the previous stage, which made the exploration faster.

Phase 3: We ran Cerebrus on 17 blocks in our final RTL release. The models used for these runs were generated from the mid-level and pre-final RTL versions.

For some blocks, multiple models were provided in the Cerebrus recipe. On average we noticed a 20-30% TNS improvement and a 10-15% leakage improvement across the runs. We also observed a density gain in two blocks, which led to a DRC count reduction.

B. SoC: Sub-3nm Node

The ongoing project has 15 blocks planned for Cerebrus exploration. A few of the blocks in initial trials showed a 20% leakage gain and a 30% TNS gain, with a 1% density improvement.

Overall, Cerebrus usage helped close the designs faster with better recipes, saving thousands of manual experiments, particularly with regard to power, performance, and area (PPA) metrics. Through the case studies on Intel SoCs, we illustrate how Cerebrus automates tasks, reduces iteration cycles, and enhances productivity and efficiency while meeting PPA constraints.

Akshay Bhardwaj, Intel
Mrugen J. Purohit, Intel

Leveraging Generative AI to Accelerate "Correct by Construction" Design

Leveraging generative AI technology in the field of LSI design, combined with machine learning models and domain-specific knowledge, significantly increases productivity, reduces design iterations, and ensures timely delivery of LSIs to customers. 

Traditional semiconductor chip design typically begins with a long and labor-intensive process of defining specifications, creating RTL models, and documenting them before the engineering team begins designing the actual circuit. These processes typically take months, are manual, and are subject to error. However, with the advent of generative AI like ChatGPT, what was once considered a pipe dream is becoming a reality.

This is the goal of new tools enabled by AI-driven data analytics, developed by Cadence Design Systems as the Cadence Joint Enterprise Data and AI (JedAI) Platform. By employing large language models and generative AI, the tool is expected to help eliminate heavy labor and human error in the early semiconductor definition and design verification stages.

In this work, we investigated whether the tool can grasp design issues using actual cases detected during design, and whether it can evaluate the validity of various design artifacts created manually. We are conducting various tests, such as whether it can help create deliverables that were previously handled manually. The tool is still under development, and enhancements to its functionality and performance are ongoing.

This new generative-AI-enabled EDA tool is expected to be a technology that opens up the future, and our company has been involved in its development from an early stage, proceeding with joint development that incorporates Renesas' know-how. In fact, ensuring specification and design consistency is critical, and verification costs are increasing as design features become more complex. Renesas and Cadence are leveraging generative AI's LLM capabilities to address this challenge. We look forward to new approaches to effectively manage design quality and to significantly reduce the time from specification to final design. We want to make this evolution a reality.

Koji Hirakimoto, Renesas Electronics Corporation

PPA Push on Complex Designs with Cerebrus Solutions

With ever-increasing demands on PPA metrics and multiple chips taping out in a short span of time, we need a tool that can deliver the best PPA without much intervention from the user. As workloads increase, we need smarter ways to explore design and tool options to gain productivity and achieve the best QoR. Cerebrus provides a solution to this problem by giving the user the flexibility to explore and tune PPA recipes. With an easy-to-use model, it integrates easily into a standard implementation flow and is simple to deploy on any design. It supports several operation modes, such as cold and warm start, to provide a fine-tuned model for each design throughout the design cycle. On average, we have seen good gains in area, timing, and power with this approach. Incorporating Cerebrus models into the main production run is also easy, and we see good scalability in the results.

In this session we will go over how to use Cerebrus in the front-end Genus flow to achieve the best PPA for a module. We will also cover the various modes of operation, including system primitive and user primitive modes. Finally, a strategy for tuning Cerebrus for optimal resource usage is discussed.

Alice Chan, Qualcomm

Verisium SimAI: Coverage Gaps Meet Their Match

Learn how to optimize verification productivity and efficiency with AI-driven verification. Verisium SimAI streamlines regressions, enabling more efficient use of your simulation cycles while maximizing coverage and rooting out bugs. Additionally, explore Verisium AI's role in enhancing the debug experience, where intelligent algorithms work as an engineer's copilot to root-cause failures quickly.

Paul Graykowski, Cadence

In-Design Electrical Analysis

112 Gbps PAM4 Interconnect Models Simplify Channel-Wide Modeling and Simulation

112 Gbps PAM4 data rates are commonplace in AI/ML, HPC, quantum computing, and data center equipment applications. In many instances, high-speed signals are routed from one PCB to another via high-performance front-panel, mezzanine, and backplane interconnects. At lower speeds, SI simulations using separate, cascaded models for each segment of a design (Die-Package-Breakout-PCB-Breakout-Connector-Breakout-PCB-Breakout-Package-Die) have provided acceptable results. However, 112 Gbps PAM4 and faster applications must treat the PCB-connector interface as a single segment in order to achieve accurate results. This presentation will discuss how Samtec and Cadence are collaborating to provide high-performance interconnect models that can be merged directly with Cadence PCB designs and support the next wave of high-speed simulation challenges.

Matthew Burns, Samtec

High-Performance Clarity Project Demonstrating Simulation-Measurement Correlation to 50GHz and Beyond

While working with a host of customers this last year, we continue to hear "simulation-measurement correlation is a bit of a black art," or "no, we don't really close the loop on our high-speed design process," or "we thought our measurements were really good; no, we have not questioned that…", etc.

Senior veterans from Cadence and Wild River Technology (WRT) have teamed up over the last two years to address fundamental problems of practical electromagnetics using Cadence Clarity together with the WRT channel modeling signal integrity platform, the CMP-50. Topics covered include the influence of measurement techniques and fabrication on the physical side, and boundary conditions and material identification for simulations. Do de-embedded measurements ensure good simulation-to-measurement correspondence, and what are the other options? Measurement quality itself is an endemic, little-recognized issue, which will be discussed in terms of correspondence. We will conclude with guidelines to improve physical measurements and recommendations to ensure correlated Clarity electromagnetic simulations. In this session we demonstrate excellent correspondence over a very wide frequency range by understanding the limitations of both our measurement and simulation setups. In addition, we will briefly cover new work planned for 2024: further extending the frequency range over which we can demonstrate correlation, showing how anisotropic material properties can kill your design, and increasing the understanding of how crosstalk propagation occurs in today's SerDes interfaces.

This is a hard-hitting practical discussion relevant to all engineers working on high-speed design at the system level.

Alfred Neves, Wild River Technology

How to Signoff Multi-Chiplet High-Speed Interfaces for Signal Integrity Compliance

Using a case study from a chiplet-based SmartNIC platform composed of a CXL I/O hub and two eight-core RISC-V processors, learn how an ecosystem came together to successfully design this system. The case study includes an internal high-speed interface based on the Open Bunch of Wires (BoW) die-to-die (D2D) standard, where each D2D link has a bi-directional bandwidth of one Tb/s. In addition, the platform supports network connectivity with x40 PCIe Gen-5 and 800 Gb/s (x8 112 Gbps) Ethernet, both of which can be flexibly configured. This three-chiplet SmartNIC SiP is realized on an organic substrate. In this talk, we present a signal integrity signoff methodology for the three high-speed interfaces on this platform, using the Cadence design and analysis tools.

Suresh Subramaniam, Apex Semiconductor

In Design Electrical Analysis Design Layout with Intention Versus Post-Design Modeling/Rework

The traditional roles of the layout designer and electrical engineer are being reassessed as advances in package design software expand to support more functionality within the layout editor application. Designers are being tasked with making more electrical considerations and trade-offs earlier in the design creation phase to support an improved electrical SI/PI final signoff verification. Electrical engineers are now asked to define the SI/PI architecture up front and then delegate the iterative electrical tuning processes to the layout designer. While this might appear to create an overlap of electrical analysis, the discipline of each role is differentiated by the type of deliverable.

Initially, the electrical design process was thought to be achieved through a best-practice style of trace routing followed by an electrical engineer's modeling and feedback, resulting in numerous redesign cycles. However, with the need for advanced SI/PI package performance, the focus has turned to the necessity for intentional routing strategies, which reduce the impact of redesign rework on the layout, thereby reducing the overall number of detailed SI/PI performance signoff-style reviews.

With today's emphasis on creating high-performance and cost-effective advanced package design solutions, both the designer and the electrical engineer benefit from the electrical modeling guidance provided by Cadence's In-Design Analysis (IDA) software tools. The simplified electrical modeling workflows now available to the designer have enabled the electrical engineer to focus more on defining future electrical IDA architecture configurations and enhance the trusted verification modeling signoff process.

Jonathan Micksch, Amkor Technology, Inc.

Meeting Future System Thermal Performance Demands Through Packaging

As electronics get smaller and faster, the environment for thermal issues is becoming more and more challenging. These problems are widespread and can arise in the chip, the board, the package, and the entire system. Design challenges can not only impact the performance of the chip but also affect package and PCB performance due to resistive losses. It's important to note that these resistive losses are also temperature dependent, which makes IR drop analysis a must.

Also, as electronic designs shift toward 2.5D and 3D-IC design, thermal challenges are exacerbated: multiple dies are densely packed, and the resulting heat generation leads to a rise in temperature. Having a clear understanding of the thermal constraints in the early stages of a design is a must to avoid long design cycles. Learn how Chipletz used Celsius PowerDC to ensure reliable power delivery for their advanced IC package designs, including electrical/thermal co-simulation for optimized accuracy.

Jeff Cain, Chipletz

Optimize EMX Resource Usage with Machine Learning

Electromagnetic (EM) simulations require substantial computing resources, including CPU cores and memory. Proper resource management for EM simulations is required to achieve the best performance with balanced computing resources. In this presentation, we will discuss how we use machine learning algorithms to manage our computing resources with Cadence EMX. We will go over how we break an EMX simulation into phases for which we can predict simulation time, memory requirements, and CPU core usage, maximizing efficiency and minimizing overall resource usage.
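The per-phase prediction idea can be sketched as follows. The feature names, linear coefficients, and parallel-scaling model below are illustrative assumptions for this abstract, not part of EMX or its APIs:

```python
# Hypothetical sketch: predict per-phase EM-simulation resource needs from
# layout features, then pick a core count that meets a runtime budget.
# All coefficients are toy values such as a regression could learn from
# logs of past runs; they are not EMX parameters.

def predict(features, weights, bias):
    """Simple linear model: bias + sum of weighted features."""
    return bias + sum(w * features[name] for name, w in weights.items())

RUNTIME_W, RUNTIME_B = {"mesh_cells": 2e-4, "freq_points": 0.5}, 10.0  # seconds
MEMORY_W, MEMORY_B = {"mesh_cells": 1e-4, "freq_points": 0.1}, 2.0    # GB

def choose_cores(runtime_s, budget_s, max_cores=32, efficiency=0.8):
    """Smallest core count meeting the runtime budget, assuming imperfect
    parallel scaling (each core contributes `efficiency` of ideal speedup)."""
    for cores in range(1, max_cores + 1):
        if runtime_s / (cores * efficiency) <= budget_s:
            return cores
    return max_cores

design = {"mesh_cells": 200_000, "freq_points": 40}
runtime_s = predict(design, RUNTIME_W, RUNTIME_B)  # 70.0 s predicted
memory_gb = predict(design, MEMORY_W, MEMORY_B)    # 26.0 GB predicted
cores = choose_cores(runtime_s, budget_s=20.0)     # 5 cores
```

Scheduling each phase with its own prediction, rather than reserving worst-case resources for the whole simulation, is what allows overall resource usage to be minimized.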

Bin Wan, Skyworks Solutions

System-level C-PHY High-Speed Signal Integrity Analysis for Mixed and Virtual Reality Systems

Mixed and Virtual Reality (MR/VR) systems present challenging design specifications regarding form factor, weight, dense routing, and meeting EMI/EMC standards. These systems encompass an array of rigid flex printed circuit boards (RFPCs) with high-speed signals that must contend with signal degradation, impedance and return path discontinuities, in addition to other traditional signal integrity challenges. These issues require careful design tradeoffs for key parameters such as the stackup layers/zones, signal routing and termination, etc. to ensure signal fidelity.

In this paper, we are partnering with Meta to explore a simulation methodology targeting a MIPI C-PHY interface, focusing on the high-speed routing across various interconnects and transitions within a MR/VR system. The goal will be to evaluate the impact on signal integrity caused by reference-plane transitions and the interconnects between different layered structures. This MR/VR system consists of six RFPCs and three interconnects that will be modeled, meshed, and extracted with Clarity 3D Solver. We will also investigate the Pogo Pin interconnects and the use of a "virtual ground" at adjacent low-speed GPIO control signals. The simulation results will then be utilized to validate MIPI C-PHY compliance with SystemSI. The interface compliance will consist of time-domain results such as eye diagrams that include both the passive interconnect and a time-domain stimulus. This proposed simulation methodology provides a comprehensive approach to signal integrity analysis for C-PHY interfaces.

Shiv Agarwal, Meta
Raul Stavoli, Cadence

In-Design Mechanical Analysis

Automatic Adjoint-Based Design Optimization for Laminar Combustion Applications

We present an open-source and flexible framework for automatic adjoint-based design optimization of laminar combustion devices. The framework allows for multi-objective optimization of key performance indicators such as heat transfer and pollutant emissions. Geometry and mesh deformation is performed based on surface sensitivities computed from the discrete adjoint solution of the reactive Navier-Stokes equations, and algorithmic differentiation is used for the gradient calculations. The flow solution is obtained from the preconditioned variable-density Navier-Stokes equations in the low Mach number limit, and combustion is modeled using a flamelet approach for laminar premixed conditions. Reaction chemistry, thermodynamics, and mass transport are parameterized with a progress variable and the total enthalpy. To increase the accuracy of pollutant emissions, additional transport equations for CO and NOx are solved. The framework is built on the foundation of several open-source applications to calculate CFD and adjoint solutions, obtain geometrical sensitivities, and perform free-form deformations. To ensure high mesh quality, an automatic re-meshing procedure has been applied by coupling Fidelity Pointwise by Cadence into the optimization workflow. The optimization framework is demonstrated by simultaneously minimizing CO and NOx emissions as well as the outlet temperature of a steady, laminar, premixed methane-air flame in a simplified 2D model of a burner and heat exchanger with strong flue gas cooling. The optimized geometries and the impact of the objective weight factors on the pollutant emissions, thermal efficiency, and the shape change are investigated.
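As a generic sketch of the discrete adjoint machinery this approach relies on (notation here is illustrative, not taken from the paper): with state $U$, design variables $\alpha$, steady residual $R(U,\alpha)=0$, and objective $J(U,\alpha)$, a single linear adjoint solve yields the full design gradient:

```latex
\left(\frac{\partial R}{\partial U}\right)^{\!T}\lambda
  = -\left(\frac{\partial J}{\partial U}\right)^{\!T},
\qquad
\frac{\mathrm{d}J}{\mathrm{d}\alpha}
  = \frac{\partial J}{\partial \alpha}
  + \lambda^{T}\frac{\partial R}{\partial \alpha}.
```

The key property is that the cost of the gradient is independent of the number of design variables, which is what makes optimizing over the many degrees of freedom of an FFD box tractable.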

Conclusions:

A framework for design optimization of laminar premixed combustion problems using a discrete adjoint approach was developed and published as open source. The procedure can be used to minimize emissions, temperature, and any other quantity of interest that can be derived from the solution variables. A tabulated chemistry method based on progress variable and enthalpy has been implemented for incompressible laminar premixed flames in the general case of strong heat losses. Algorithmic differentiation enables easy extension with additional transport equations for other species or to define new objectives. We demonstrate the design optimization procedure by minimizing the mass-flow-averaged outlet CO and NOx emissions and temperature in a simplified gas boiler consisting of a burner and a heat exchanger. The design optimization produced improved designs with lower CO and NOx emissions and better thermal efficiency. The resulting designs lie outside the range of designs that would have been found in a parameter study with a limited number of degrees of freedom. In a multi-objective optimization, the objectives are usually competing, leading to Pareto optimality, where one objective cannot be minimized without sacrificing another. By using weight factors for the objectives, their priority can be controlled, and we demonstrate this with eight different cases. The degrees of freedom and the smoothness of the design can be controlled with the nodes of the FFD box.

Daniel Mayer, Bosch

Cadence Fidelity CFD: Accuracy, Acceleration, Automation, and AI

Learn about several enabling capabilities currently available or soon to come in the Fidelity CFD platform. The combination of robust CAD import, wet surface detection, autoseal, and surface and volume meshing performance improvements can now very significantly reduce the time required for preprocessing large complex models. The GPU accelerated Fidelity LES Solver with its novel Voronoi-based mesh generation is now available to predict challenging flows involving aeroacoustics, complex separation, and combustion. Finally, we’ll explore the recently announced Millennium M1 Multiphysics Supercomputer including how massive acceleration enables AI-based modeling for system simulation.

Frank Ham, Cadence

Get the Heat Out - Our Experience Solving Thermal Problems with Celsius EC

Smaller, Faster, Cheaper. These forces drive us when designing electronics, and meeting these market needs puts many challenges in front of us. One of these is heat: the increase in performance means increased heat generation, and the smaller form factors mean less space to dissipate that heat. At the same time, cost pressures reduce our options and the availability of dedicated heat management devices. This paper will discuss our experience using Celsius EC to model and identify an overheating issue in a consumer product. Analysis in Celsius EC allowed us to identify a small design adjustment that resolved the overheating problem without significant changes to product form factor or cost. We will review our process of building the thermal model, our analysis, and the subsequent design changes, and examine the correlation to actual results when the change was implemented and tested in the product.

Ted Larson, OLogic Inc

Using Digital Twins to Optimize the Implementation of AI Compute Clusters

Just as Artificial Intelligence is disrupting the business world, the computational requirements to support AI are disrupting the design of data centers. In this presentation, we will discuss how the use of digital twins can enable data center operators to add AI compute clusters to their existing facilities without compromising performance or reliability. We will focus on the challenges of integrating liquid cooling and hybrid liquid/air cooling systems into existing data centers that were not designed for high-density compute loads, and we will show how Cadence Reality DC can help to optimize the placement, configuration, and operation of AI compute clusters.

Steve Blackwell, Vertiv

IP

Design and Verification of a Chiplet Reference Platform in Samsung 5nm Based on Tensilica Cores

This presentation will introduce the design of a chiplet reference system in the Samsung 5nm process. Chiplet systems are currently a major trend in system development, as chiplets allow the construction of systems in which the total chip area is significantly larger than the maximum size of a single die. To develop a chiplet system, the system concept must first be created; this will be presented in the lecture. Once the system concept is in place, the chiplet system is implemented. For this purpose, the chiplet interface must be developed, which corresponds to a mixed-signal design. The entire system is a digital multiprocessor system consisting of different Tensilica cores, and it also includes DDR and PCIe interfaces. The entire system is implemented using Cadence tools. Furthermore, the package must be developed, for which the Cadence package tools are used. The circuits are then verified using the digital and mixed-signal verification tools, and the package is verified with the package verification tools.

Andy Heinig, Fraunhofer IIS/EAS
Kevin Yee, Samsung

How Standards-Based Protocols are Imperative for the AI Workloads of Tomorrow

This past year, Generative AI became a phenomenon and made AI a household word. In this presentation, we will discuss the key market trends driving HPC and AI and the demand for newer SoC, chip-to-chip, and module architectures that address the needs of this space. As the need for performance and computation capability increases, standards bodies and IP implementers rise to the challenge to provide solutions. We will examine important memory standards such as the latest LPDDR and HBM versions, key interface standards such as 112G/224G, PCIe, and CXL, and chiplet and die-to-die interfaces such as UCIe that are critical to these new architectures for HPC and AI products. Attend this talk to learn about the unique architecture requirements and how the right selection of IP can enable successful designs.

Arif Khan, Cadence

Navigating Challenges: Chiplet Integration in the Automotive Realm

The automotive sector consistently leads in technological advancements, embracing innovations across Electric Vehicles, Autonomous Driving, Connectivity, and Sustainability. Today's vehicles rely significantly on electronic systems, comprising a major portion of their structure and functionality. The advent of Chiplet technology has empowered the creation of modular and scalable electronic control units, allowing major functionalities to work in tandem. However, this advantage also introduces new challenges, including: 

  • Continuous monitoring of interconnect signal quality and self-repair capabilities
  • Compliance with functional safety standards
  • Optimization of overall system performance
  • Ensuring cross-vendor compatibility

In this engaging panel discussion, we will hear from automotive industry experts who are at the forefront of shaping the requirements for future system-on-chip architectures. They will share insights into the current challenges and strategies for defining scalable chiplet-based systems, ensuring a stimulating discussion.

Anunay Bajaj, Cadence
Pratibha Sukhija, Cadence

Optimizing Chiplet Development with Efficient Integration of Networks on Chips and Die-To-Die Controllers

In the evolving landscape of semiconductor design, the shift from Systems-on-Chips (SoCs) to Systems-of-Chiplets (SoC²) marks a significant transformation, driven both by the yield limitations of designs approaching the reticle limit and by the need for a flexible, standards-based ecosystem of interoperable chiplets that allows the efficient combination of different technology nodes. This presentation will illustrate the intricacies of this transition, with a particular focus on integrating networks-on-chip (NoCs) with the PHYs and controllers enabling the era of chiplets. The burgeoning field of AI/ML applications, known for their complex and scalable computing architectures, stands at the forefront of this evolution.

The development of physical interfaces such as UCIe, BoW, and XSR, together with their link-layer control mechanisms, is pivotal in enabling chiplet interoperability and the orchestration of "Super-NoCs" across multiple chiplets for efficient data transport in chiplet-based designs. The protocol layer, encompassing NoC protocols like AMBA AXI and CHI, emerges as a critical element in this paradigm. This presentation will underscore the escalating challenge posed by increasing computing throughput across chiplets and memories. To address it, we emphasize the vital role of coherent and non-coherent NoC architectures in augmenting AI/ML computing, aiming for optimized performance, power, and cost.

We will explore data transport challenges specific to AI/ML inferencing and Generative AI applications within both SoC and SoC² frameworks, and showcase efficient NoC IP development frameworks, crucial for early architecture optimization and physical integration, that connect seamlessly with industry-leading digital implementation flows. By examining case studies from the ADAS and AI/ML domains, we will provide insights into the synergistic relationship between Arteris networks-on-chip and Cadence chiplet controllers and PHYs, highlighting their collective impact on future semiconductor design and performance.

Guillaume Boilet, Arteris

UCIe: Catalyzing Chiplet Ecosystem Adoption and Shaping Semiconductor Solutions

The widespread adoption of Universal Chiplet Interconnect Express (UCIe) is revolutionizing the semiconductor landscape, particularly in the realm of chiplet ecosystems. This presentation delves into the transformative impact of UCIe, comparing and contrasting its significance before and after its introduction. We explore the pre-UCIe landscape, highlighting the limitations and challenges faced by traditional die-to-die interconnect solutions. Through a comprehensive analysis, we reveal how UCIe has emerged as a game-changer, enabling seamless integration and fostering collaboration within the chiplet ecosystem.

Key Points:

Pre-UCIe Era vs. Post-UCIe Paradigm Shift: Delve into the challenges and constraints of die-to-die interconnect solutions before the advent of UCIe. Contrast this with the transformative capabilities and enhanced efficiency ushered in by UCIe, illustrating its pivotal role in reshaping semiconductor integration.

Comparative Analysis of UCIe: Explore the unique features and advantages of UCIe in comparison to other die-to-die interconnect solutions. Highlight the superior scalability, performance, and versatility offered by UCIe, positioning it as the preferred choice for modern chiplet ecosystems.

Outlook on Standards Adoption: Provide insights into the future trajectory of standards adoption, with a focus on UCIe. Discuss the anticipated benefits of widespread adoption of UCIe, emphasizing the positive implications for the overall semiconductor solution space.

Unlocking Potential: Emphasize the broader implications of UCIe adoption, including enhanced innovation, accelerated time-to-market, and greater flexibility in semiconductor design. Illustrate how UCIe serves as a catalyst for unlocking the full potential of chiplet ecosystems, driving unprecedented advancements in semiconductor technology.

Conclusion:

In conclusion, the adoption of UCIe represents a significant milestone in the evolution of semiconductor integration. By enabling seamless connectivity and interoperability within the chiplet ecosystem, UCIe is poised to redefine industry standards and pave the way for a new era of innovation. As we look towards the future, embracing standards such as UCIe holds the key to unlocking unparalleled opportunities and driving transformative change across the semiconductor landscape.

Mayank Bhatnagar, Cadence

ZenVoice Nano: Ultra-Efficient Deep Noise Reduction Algorithm On HIFI DSPs

We present a new deep noise reduction algorithm, ZenVoice Nano, which has been successfully deployed on multiple Tensilica HiFi DSPs, including HiFi 3/3z/4/5, delivering crystal-clear voice quality even under the most demanding hardware conditions. This technology merges sophisticated deep learning methods for reducing noise and maintaining robustness, leveraging our internal foundation models and generative models. Consequently, ZenVoice Nano is capable of operating at a signal-to-noise ratio (SNR) as low as -3dB, yet delivers a noise reduction advantage of 25dB for nonstationary noise, rivaling the effectiveness of models ten times its size. For context, the most advanced commercial TWS DNR systems currently require an SNR greater than 3dB and, to our knowledge, typically achieve less than 10dB noise reduction for nonstationary noise.

This groundbreaking approach marks a significant step forward in making sophisticated audio intelligence widely accessible in real-world applications.

ZenVoice Nano delivers a compact footprint without sacrificing performance, rivaling PC-based noise reduction models in efficiency. The model's memory footprint ranges from 60 KB to 300 KB depending on signal-to-noise ratio requirements. This allows us to bring superior deep neural network (DNN)-based noise reduction to smaller devices than ever before, and such a tiny footprint allows ZenVoice to run real-time noise elimination on even the most constrained hardware.

Our solution boasts minimal latency of just 20 milliseconds, alongside full-band audio support up to 48kHz. Coupled with its modest computational demand of fewer than 300 million MACs per second, this makes it an ideal choice for deployment on low-power, compact devices.
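
As a quick sanity check, the quoted budgets can be turned into per-sample numbers. This is a back-of-the-envelope sketch using only the figures stated above; the model internals are not public:

```python
# Convert the quoted ZenVoice Nano budget figures into per-sample terms.
SAMPLE_RATE_HZ = 48_000   # full-band audio support up to 48 kHz
MACS_PER_SECOND = 300e6   # quoted compute budget: <300 million MAC/s
LATENCY_S = 0.020         # quoted algorithmic latency: 20 ms

# Compute available per audio sample at the stated MAC budget.
macs_per_sample = MACS_PER_SECOND / SAMPLE_RATE_HZ
# Samples buffered within one 20 ms processing frame.
samples_per_frame = int(SAMPLE_RATE_HZ * LATENCY_S)

print(f"{macs_per_sample:.0f} MACs available per audio sample")   # 6250
print(f"{samples_per_frame} samples per 20 ms processing frame")  # 960
```

At roughly 6,250 MACs per sample, the compute envelope is orders of magnitude below typical PC-class denoising networks, which is consistent with the claim of running on small DSPs.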

ZenVoice Nano is designed for integration in a wide array of auditory devices, including TWS earbuds, hearing aids, firefighter helmet communication systems, and virtually any other hearable technology. Its successful deployment across numerous chip architectures, such as DSP, NPU, ARM, and RISC-V, has led to multiple design wins. We believe its ultra-efficient structure can enable many more applications that require better battery life and lower cost.

Yuan Lu, Aizip

OpenEye

Improving Drug Discovery Through Computational Methods at Cadence

As we all witnessed in the recent pandemic, diseases impact everyone on the planet despite our best efforts. One of the most efficient means of combating disease is discovery, development, and distribution of pharmaceuticals. Creating new pharmaceuticals, whether small-molecules, biologics, macrocycles, or degraders is a long and costly endeavor usually taking more than 12 to 15 years and $1Bn to $2.5Bn in investment. The process is challenging because much is unknown about human biology, therapeutic disease targets, and predictive delivery, efficacy, and toxicology. Together these uncertainties lead to a high degree of risk and nine of ten drug candidates fail in clinical trials. At Cadence Molecular Sciences, we capitalize on the combined strengths of OpenEye’s twenty-five years in drug discovery with Cadence’s deep technical expertise and broad industry connections to accelerate innovative development of scientific software. The power of cloud-scale, GPU acceleration, AI and efficient molecular simulations have dramatically increased the impact of computation on drug discovery and development. We orchestrate these technologies to help customers identify new therapeutic targets, screen billions of compounds for initial binders, optimize those hit molecules into lead series and predict the effective pill form of eventual clinical candidates. Under Cadence leadership and through customer collaborations, we are expanding our tools from our strength in small-molecule drug discovery toward cryptic pocket identification, cryogenic electron microscopy, biologic therapeutics, formulations, and generative AI via integration of NVIDIA’s BioNeMo. Together, these exciting advances enable pharmaceutical and biotech companies to discover and develop the therapeutics of the future.

Geoff Skillman, Cadence Molecular Sciences

PCB Design

Accelerating Engineering Workflows with Autodesk & Cadence

As manufacturers grapple with the complexities of digital transformation within their unique contexts, many question its relevance and application. In this presentation, we introduce the Autodesk Fusion Industry Cloud as a solution to support end-to-end workflows for connected product development, offering remarkable productivity gains and competitive advantages. We will also delve into the strategic partnership between Autodesk and Cadence that is reshaping PCB design and 3D modeling workflows. We'll showcase the integrations between Autodesk Fusion and Cadence Allegro X and OrCAD X, integrations that empower mechanical and electronic engineers to work together more efficiently, representing a significant shift in PCB design practices. Join us to understand how the Fusion Industry Cloud and the Autodesk and Cadence integrations can accelerate your digital transformation journey.

Trent Still, Autodesk

DesignTrue DFM

Intel's existing methodology, the necessary changes, and the rule development required to enable DesignTrue DFM adoption.

DFA tables previously could check only component-to-component spacing; now, with DesignTrue DFM, we are able to check DFA, DFM, and DFT live during PCB layout. This brings the knowledge of designers, manufacturing engineers, and test engineers into a standard configuration file that allows automated checking and a correct-by-design methodology.

We intend to discuss the following topics:

  • Cadence enablement of:
    • DFA
    • DFM
    • DFT
  • Conversion of Intel's hundreds of design rules to be compatible with DesignTrue DFM
  • Collaboration with layout, manufacturing, and test engineers to validate and approve the new process
  • Benefits (methodology & process): a shift-left approach that enabled PCB designers to make manufacturing changes as the board was being laid out, instead of waiting until the end and having violations thrown over the wall
  • Overall, this has improved the quality of our layout designs while reducing assembly, fabrication, and test time, improving design throughput

Gary Kipp, Intel Foundry
Sam Dalrymple, Intel Foundry

How Will Artificial Intelligence Influence PCB Design?

First, we will attempt to define artificial intelligence (AI). Hint: it is far richer and more diverse than the headlines suggest. After a bit of a history lesson, we will summarize the current state of the art using examples from outside and inside of EDA. Ultimately, we will try to answer the question "How will AI change the way I do PCB design?"

Taylor Hogan, Cadence

IPC-2581: Expedite NPI with Smart Design Data Hand-Off

IPC-2581 provides a unified PCB design data format, enabling seamless data exchange between designers and fabrication houses. It integrates design files directly into the CAM system and facilitates quicker new product introduction (NPI).

In this presentation, you'll learn the benefits of IPC-2581 over Gerber and ODB++.

PCB design files include a schematic, netlist, stack-up, assembly drawing, and special instructions, if any. Managing and maintaining multiple files increases the risk of data discrepancies. IPC-2581 (IPC-DPMX) encapsulates design, assembly, and fabrication data in a single file, eliminating the need for multiple documents. 

IPC-DPMX simplifies the transfer of stack-up details, including layer thickness, impedance requirements, and tolerances. In revision C, controlled impedance analysis is outlined, covering transmission line geometry, line width, spacing, and tolerance attributes. 

The bi-directional DFX module of the design data format allows a two-way exchange of information. It enables you to receive feedback on issues from the manufacturer and fix them in real-time. 

IPC-DPMX provides enhanced traceability throughout the design and manufacturing processes. It keeps track of design rules and constraints, making the DRC process a breeze and re-engineering much easier.

The IPC-2581 format automates machine-to-machine communication, eliminating the need for manual data entry and verification. This not only streamlines the manufacturing and design processes but also maximizes productivity, addressing the needs of Industry 4.0.

In this presentation, our experts will show you how to accomplish an intelligent design data hand-off using IPC-2581.

Amit Bahl, Sierra Circuits

Paradigm Shift - How AI Will Change Hardware Design

PCB layout has traditionally been a labor-intensive and time-consuming manual process, relying heavily on layout designers' accumulated knowledge and skill. Limited by human speed, total layout time has been the bottleneck of the hardware design process (even with design partitioning across multiple parallel and serial shifts). Based on our hands-on experience collaborating with Cadence on X AI since 2022, we foresee the potential to accelerate the layout process by 10x (or more) in the near future. With this massive reduction in layout duration, what changes and challenges can we forecast in terms of project planning, workflow, and execution?

We intend to discuss the following topics:

  • Early design stages layout studies methodology & process
  • Pros/cons of late layout engagement
  • Concurrent or turnkey design model - Project planning
  • Schematics & Constraints Readiness: importance of high-quality constraints 
  • Tool Process flow (schematics, layout, constraints, simulation) 
  • How to qualify Layout quality
  • Future roles & responsibilities (DE/SIE/PIE/ME/LE)

Andy Kiang, Intel
Gary Kipp, Intel
Naveid M. Rahmatullah, Intel
Sam Dalrymple, Intel

Revolutionizing Design using Electronic Data Sheets

Problem Statement

With increasing complexity in system design, it is easier than ever to make simple mistakes that have significant cost and schedule implications. One example is placing a component that supports only 1.8V on a 3.3V power rail; the board will work for some period before failing mysteriously. Another is a pin that is either misconfigured or not connected at all. These mistakes result in expensive board spins and schedule impacts that can determine whether a product is successful. With the right standards-based information, however, it is possible to easily integrate new components into designs and catch mistakes early.

Intel created many tools to help our customers find and resolve issues faster.  Configuring the tools with design-specific data was tedious, however, and negated any efficiencies.  In many cases, manual, error-prone design checks were faster than configuring or populating the tools with data needed to verify correctness using automation.   Proprietary and non-standard data formats made delivering a cohesive suite of tools and capabilities difficult.

Solution

Working with Cadence, Google, and independent hardware vendors (IHVs), we developed a standardized Digital Datasheet for the PC platform that can express the electrical, configuration, and even physical characteristics of devices such as SoCs and embedded controllers, enabling new levels of automation. This results in fewer mistakes and higher-quality designs by automating formerly manual checks. In addition to describing how a component should be connected or configured, the Digital Datasheet can also be used to automatically generate source code to drive the component, saving weeks of development time. The standard is publicly available under an Apache 2.0 license, and open-source tooling is available to facilitate the creation of specification-compliant Digital Datasheets. In this presentation we will demonstrate important usages and discuss the roadmap.

Intel and Cadence are working closely to incorporate the Digital Datasheet into market-leading design tools to simplify component integration and verification tasks. Cadence has delivered an innovative approach to modernizing the flow by integrating components and validating design requirements.

Future

Beyond the 1.0 release, the industry working group defining the standard will work to harmonize with other existing standards such as IPC-2581, automate rules, and eventually bring the Cadence Optimality tool to drive rule-based design via SI/PI models. The Cadence EDD tool also enables intelligent design, where the reference design itself is part of the digital datasheet, with seamless links to critical analysis tools such as MTBF for reliability analysis and early thermal analysis based on the floorplan.

Randy Hall, Intel
Albert Sutono, Cadence

Shifting Left: Applying Sustainability Practices During PCB Design

Life Cycle Assessment (LCA) is a scientific process for assessing the environmental impact of a product throughout its lifetime. A complete LCA includes the environmental impact associated with a product's raw material extraction and procurement, manufacturing, transportation, consumer use, and disposal/recycling. The product's embodied carbon, also known as the product carbon footprint (PCF), is a subset of LCA that focuses on quantifying the total carbon in the physical product originating from its design and parts selection.

Sustainability practitioners today calculate the PCF after the design is complete, a process relying on manual greenhouse gas (GHG) data collection, calculation, and reporting. This limited connection between design iteration and PCF assessment hides opportunities to influence design changes that would reduce carbon emissions in the final product. To meet industry decarbonization goals and regulatory requirements, a new approach is needed: streamlined PCF assessment that measures carbon impact during design. By shifting sustainability practices earlier in the design cycle, aligned with emerging IPC and ISO sustainability standards, engineers can evaluate carbon impact, cost, and performance simultaneously.

In this presentation, we will provide an industry perspective on how PCF is implemented today and how digital technologies can transform electronics industry sustainability practices in the future. Our talk will demonstrate how technology plays a role in supporting data transparency, confidentiality, and repeatability. We will discuss areas where new collaborations are needed to support data accessibility and help designers create next-generation PCB assemblies.

Marco Masciola, AWS

Signoff

Intel Standard Cells Low Voltage Characterization Enablement by Cadence Liberate Trio Characterization Solution

While Moore's law continues to scale and improve transistors, it also leads to increasing power density with each generation of technology. To help reduce chip power density, aggressive voltage scaling is applied, with transistors operating at near-threshold voltage (NTV). This brings unique challenges to standard cell design and characterization due to the large process variation and high noise sensitivity of transistors in the NTV region [1,2].

To support low-voltage (LV) standard cell characterization and overcome these challenges, Intel Foundry has been working with Cadence to develop a standard cell library LV characterization flow for the Intel 18A Gate-All-Around (GAA) process. This flow includes:

  • Liberate Trio, a unified library characterization system with a multi-PVT characterization flow that streamlines the characterization process
  • Unified statistical and nominal modeling, in which Liberty Variation Format (LVF) and nominal libraries are generated in a single session
  • A 10,000-CPU-scaled flow with the BOLT job management system, which manages and ensures that all farm resources are utilized efficiently

As a result of this collaboration, several flow enhancements were made, such as NTV settings, advanced simulation mode settings, machine learning (ML) based options for LVF characterization, updated brute-force Monte Carlo (BMC) tool settings, and improved settings for BOLT-server-based runs, which reduced simulation failures and runtime and enhanced data correlation.

As part of the flow development, several quality assurance (QA) checks are enabled to ensure delivery of high-quality Liberty models. Liberty model structure comparison, data range checks, monotonicity checks, check_LVF, etc. are deployed. We also developed SPICE to flow-generated Liberty model correlation checks, which include comparison of cell-level timing arc values (delay, transition time, constraint arcs, variation sigma) and path-level timing and power.

Shravya Gottipati, Intel
Dan Design Shi, Intel

Advanced Local Layout Effect (LLE) Aware Design and Optimization

Local layout effects (LLEs) are often neglected or approximated in the design and optimization flow. In advanced technology nodes, such as 3nm and below, these effects can lead to discrepancies between intended and actual silicon performance.

This presentation will introduce a new technology, "Local Layout Effect (LLE) Aware Design and Optimization," that enables the consideration of manufacturing impacts associated with nanosheet devices during the design and optimization process. The proposed technology, developed in collaboration with Cadence, integrates changes in the tools and flow from library characterization to signoff. We will demonstrate the effectiveness of our approach in silicon design for these leading-edge technology nodes.

Edson Gomersall, Cadence

How to Signoff a Two Billion Cell Design with Tempus

Advanced semiconductor applications such as AI and graphics are fully leveraging dense advanced node technology to push the extreme limits of design size. To signoff such large designs, engineers are increasingly relying on distributed compute methods to accelerate the analysis. In this presentation, Cadence will present the latest capabilities of Tempus Distributed STA signoff and related closure methodologies including Certus.

Brandon Bautz, Cadence

Mastering the Timing Closure Maze for Enhanced Productivity and Efficiency with Certus

The chip design industry continues to face incredible pressure to deliver higher performance in a smaller area with lower power demands. From high-performance systems-on-chip for 5G mobile devices and network infrastructure to the radio-frequency transceivers that enable autonomous vehicles and the industrial Internet of Things, today's applications demand reduced size paired with the lowest power consumption. This "downsizing" trend has affected every part of the system, but especially the integrated circuits (ICs) that represent the foundational building blocks.

To create differentiating implementations, chip design companies want to pack more functionality on chip with much higher frequency targets. To maintain market leadership, there is pressure to deliver working silicon on time. This compresses release deadlines, leaving challenging timelines for taking designs from cradle to finish. For complex, high-frequency designs, RTL architectural changes continue until late in the design cycle, leaving very little runway for timing closure and signoff.

In advanced technology nodes, process performance is challenged by variation. To ensure reliable operation of the design at its target performance, timing must be closed across a huge set of scenarios. It is impractical to implement the design in the place-and-route (PnR) flow for the full expanded scenario set, so PnR implementation is done on a limited subset of dominant scenarios for faster turnaround time (TAT). Performance and power targets on the expanded scenarios are guaranteed through ECO flows, and hence the ECO flow plays a significant role in design timing closure.

The current ECO solution requires significant manual handoffs between flows, creating a long ECO feedback loop that limits the number of ECO loops per week. This impacts the design execution timeline and hence the product's time to market.

The flow steps after implementation are:

  • Constraint generation from the constraint management tool for chip-level timing
  • Physically aware ECO to generate a change list (ECO) for each partition/block
  • Individual partitions consume the change list and perform incremental ECO place, ECO route, and metal fill

This presentation will cover our use model of Cadence Certus, a single ECO cockpit providing a highly distributed, efficient flow for generating and implementing timing ECOs, helping achieve timing closure on a hierarchical design with a faster TAT. Using Certus, we observed a 5X productivity gain with concurrent chip-level optimization and signoff closure compared to the traditional ECO flow.

We will discuss the challenges and solutions for meeting signoff QoR using the Certus closure solution. We will also compare the turnaround-time benefits and QoR metrics against the traditional signoff flow.

With Certus ECO closure, we observed improved chip-level ECO/timing predictability and faster turnaround during the convergence cycle.

Munish Muneeswaran, Intel

Pegasus Signoff on the Cloud with SPOT Instances for Cost-effective Accelerated TAT

Tuple Technologies, which provides cloud compute infrastructure to one of the leading bio-pharma companies, is transitioning the customer to Pegasus on the cloud for physical verification signoff of their designs. The customer has been running compute workloads on the cloud, which provides instant access to thousands of CPUs, managed by Tuple Technologies as cost-optimized cloud instances. Tuple, together with the customer, has successfully evaluated Pegasus on the cloud using SPOT instances, which helped reduce their compute cost. Pegasus FlexCompute contributes 20-40% higher CPU utilization, without the need for the designer to predict how many CPUs are needed for the best runtime. Overall, the combination of FlexCompute and cloud SPOT instances reduces compute cost by >80% and achieves the fastest TAT with the massively parallel Pegasus engine and its flexible elastic cloud compute architecture. TrueCloud, which protects IP and data security, is another important feature evaluated.
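
As a rough illustration of how SPOT pricing and higher utilization compound into a total cost reduction, consider a back-of-the-envelope model. All rates, discounts, and utilization figures below are hypothetical assumptions for illustration, not numbers from the evaluation:

```python
# Hypothetical cost model: billed cost = useful work / utilization * hourly rate.
ON_DEMAND_RATE = 1.00    # normalized $/CPU-hour for on-demand instances
SPOT_DISCOUNT = 0.70     # assume SPOT instances cost ~70% less (varies by market)
UTIL_BASE = 0.60         # assumed baseline CPU utilization
UTIL_FLEX = 0.60 * 1.35  # FlexCompute: 20-40% higher utilization (~35% midpoint)

def cost(cpu_hours_of_work, rate, utilization):
    """Billed cost: useful CPU-hours divided by utilization, times the rate."""
    return cpu_hours_of_work / utilization * rate

work = 10_000  # CPU-hours of useful verification work (arbitrary)
baseline = cost(work, ON_DEMAND_RATE, UTIL_BASE)
optimized = cost(work, ON_DEMAND_RATE * (1 - SPOT_DISCOUNT), UTIL_FLEX)
print(f"cost reduction: {1 - optimized / baseline:.0%}")
```

Under these assumed inputs the two effects multiply to a reduction in the high-70s percent range, which shows why combining a SPOT discount with better utilization can approach the >80% figure quoted above.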

Vamshi Kothur, Tuple Tech

Socionext STA/Timing ECO Solution for Over Billion Gates Full-Chip Designs in Advanced Technology Nodes

Full-chip level STA/Timing ECO is necessary to verify timing for both internal block and inter-block timing paths.

For full-chip designs with over a billion gates in advanced technology nodes, such as 3nm and 5nm, STA/timing ECO job runtime and large memory consumption have a big impact on our productivity. To overcome this, we adopted the Cadence Tempus system, including DSTA, Certus, SmartScope, and the Timing Context Analysis flow, in collaboration with Cadence, and achieved a 2x productivity improvement in our STA/timing ECO flow.

In this presentation, we will discuss the total STA/timing ECO flow solution to verify and close timing violations efficiently for both internal-block and inter-block timing paths.

Nakamura Akihiro, Socionext

Verification

Accelerating Chip and System Development

Hardware-based verification systems are the core of high-performance verification flows. Our latest generation, Palladium Z3 and Protium X2 (the Dynamic Duo III), is the pinnacle of over 35 years of innovation, with record-breaking capacity, speed, and throughput providing unmatched compile and debug productivity for today's cutting-edge designs. Recent innovations in power analysis, four-state emulation, digital mixed-signal, and functional safety testing extend the systems with capabilities directly relevant to today's mobile, automotive, hyperscale, and AI needs. Capping this off is the Palladium/Protium Cloud, offering unprecedented business-model flexibility in a traditionally capital-intensive domain.

Michael Young, Cadence

Application of Cadence Virtual Bridge Emulation and SimAccel in Meta's Next-Generation Training and Inference ASIC Design

With modern training and inference ASIC designs becoming increasingly large, validating these designs presents significant challenges. At Meta, we have traditionally employed a Cadence SpeedBridge-based ICE emulation flow for design validation. In developing MTIA (Meta Training and Inference Accelerator), we sought to shift left on design verification and engage in early prototyping by adopting novel emulation techniques such as Virtual Bridge and SimAccel. These methods complement the existing SpeedBridge-based ICE solution, resulting in a comprehensive verification suite for Meta's DV, firmware, and software teams. This presentation will demonstrate how Virtual Bridge and SimAccel work within Meta's emulation environment and spotlight the advantages they offer.

Lei Gao, Meta
Aakash Verma, Meta

Jasper CDC : Leveraging Formal to Go Beyond Structural CDC/RDC Checks

Clock Domain Crossing (CDC) verification stands as a pivotal step in the VLSI design process, addressing the challenges posed by multiple clock domains within an integrated circuit. With the current methodology, i.e., static checks, users need to review all constraints and waivers regularly so that they never go out of date as the RTL changes. The existing flow targeted only structural checks.

This work delves into CDC verification utilizing the Jasper CDC App, which extends beyond structural checks in two ways: functional checks, which formally automate the verification of constraints and waiver conditions, and metastability checks, which inject metastability into the RTL. This helps catch CDC-related functional issues early, thereby reducing errors in gate-level simulation (GLS).

We have implemented the Jasper CDC flow on a subsystem-level block, which improved our confidence in the waivers and ensures our constraints never go out of date, reducing the effort spent on extended reviews. The metastability checks inject metastability into our formal testbench, modeling both setup and hold scenarios to match silicon behavior. This increases confidence that the design is robust against metastability effects, leading to proper CDC signoff of the RTL.

Vishal Jain, NXP Semiconductors

Memory Subsystem Level Protocol Compliance Checks

This presentation discusses the importance of memory-subsystem-level verification for protocol compliance in recent generations of memory subsystems using DDR standards such as DDR5 and LPDDR5, and how the Cadence Verification IP memory model team has implemented a generic solution to describe such interconnect hierarchies in a modular and simple way. This approach defines a feature and an associated grammar to capture the memory subsystem, and implements a handshake mechanism with triggers (such as commands) so that each DRAM model instance gains visibility into the other DRAM devices in the design that share resources such as the data bus and ZQ calibration. The presentation also shows how Intel's memory controller IP team used this solution to take subsystem-level verification to the next level, verifying protocol compliance against the JEDEC-defined specifications for multi-rank DDR5- and LPDDR5-based designs.
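The cross-instance visibility described above can be sketched in a few lines of Python (class and method names here are illustrative, not the Cadence VIP memory model API): a shared monitor tracks which rank model has claimed the common data bus for a burst window, so any rank can detect that a command it receives would collide with traffic from another device on the shared resource.

```python
class SharedBusMonitor:
    """Toy cross-rank checker for a shared DRAM data bus.

    Each rank model registers the cycle window its read/write burst will
    occupy. A claim that overlaps another rank's window is flagged as a
    protocol violation, mimicking the visibility that the handshake/trigger
    mechanism gives individual DRAM model instances.
    """

    def __init__(self):
        self.windows = []  # list of (rank, start_cycle, end_cycle)

    def claim(self, rank, start, burst_len):
        end = start + burst_len
        for r, s, e in self.windows:
            # Overlapping occupancy from a *different* rank means two
            # devices would drive the shared bus at the same time.
            if r != rank and s < end and start < e:
                return False  # violation: refuse the claim
        self.windows.append((rank, start, end))
        return True
```

For example, if rank 0 occupies the bus for cycles 0-8, a rank-1 burst starting at cycle 4 is rejected, while one starting at cycle 8 is accepted; a real compliance check would additionally model command-to-data latencies and ZQ arbitration.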

Harish Lalithkumar, Intel
Shyam Sharma, Cadence

Smarter Verification Through Real-Time Multi-Tool Integration

With ever-growing device size and complexity, SoC verification has become an extremely challenging task. The verification effort can often climb to more than 500 years of compute time, with tens of millions of runs and hundreds of millions of coverage bins, to uncover thousands of bugs. Debugging alone can consume weeks of many engineers' time and is prone to errors. In terms of time-to-market, therefore, verification can be considered a key limiting factor and a potential cause of missed product releases.

Reconciling thorough verification coverage with a tight SoC development schedule clearly calls for closer collaboration and better productivity through better integration and automation, an even more challenging goal. Verisium Manager, part of the Verisium AI-Driven Verification Platform, offers connection to requirements management systems and enables the following:

  • Complete traceability
  • Richer collaboration
  • Reduced risk and better compliance
  • Improved agility and faster cycle times
  • Improved quality and productivity

Sandeep Jain, OpsHub, Inc.
Sieu Chau, Cadence

Verification Efficiency Improvement and Bug Hunting Using Verisium Sim AI

For block designs that have multiple modes of operation and randomness, achieving maximum verification coverage with a minimum number of tests is a complex problem. To achieve 100% coverage for these designs, multiple randomized regression runs are needed across the different modes of operation, which demand more verification resources, LSF slots, and regression run and triage time. As hardware designs grow more complex, finding bugs early in the design cycle and achieving the desired coverage efficiently is becoming a challenge. To adapt to these complex designs, verification teams need tools that improve productivity by increasing the efficiency of coverage closure, exposing bugs early in the design cycle, and providing analytics on how existing testcases impact coverage.

To solve this problem, machine learning can be applied by correlating random variables to coverage, learning about the randomness in the current regression suite, and understanding how the distribution of random variables impacts coverage and failures.
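One objective behind this approach, regression compression, can be illustrated with a short Python sketch (a greedy set-cover heuristic under simplifying assumptions; the actual tool additionally learns random-variable-to-coverage correlations, which this sketch does not attempt): given which coverage bins each existing test hits, repeatedly pick the test that covers the most not-yet-covered bins until all reachable bins are covered.

```python
def compress_regression(test_bins):
    """Greedy set cover over regression results.

    test_bins: dict mapping test name -> set of coverage bins it hit.
    Returns a (near-)minimal list of tests that together hit every bin
    any test in the suite can hit.
    """
    remaining = set().union(*test_bins.values())
    selected = []
    while remaining:
        # Pick the test with the largest marginal coverage gain.
        best = max(test_bins, key=lambda t: len(test_bins[t] & remaining))
        gain = test_bins[best] & remaining
        if not gain:
            break  # remaining bins are unreachable by any test
        selected.append(best)
        remaining -= gain
    return selected
```

On a toy suite where four tests collectively hit six bins with heavy overlap, the heuristic keeps only two tests while preserving full coverage; run on real regression data, the same idea shrinks the number of LSF slots and triage hours a repeated regression consumes.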

This presentation describes how Qualcomm is partnering with Cadence by incorporating Verisium Sim AI into the verification flow, achieving regression compression, coverage maximization, and bug hunting in complex 5G designs.

Tejoram Movva, Qualcomm
Michael Jankauski, Cadence