• The CAD Contest at ICCAD (https://iccad-contest.org/) is a challenging, multi-month research and development competition focusing on advanced, real-world problems in the field of Electronic Design Automation (EDA). Contestants can participate in one or more problems provided by the EDA/IC industry. The winners will be awarded at an ICCAD special session dedicated to this contest. Since 2012, the CAD Contest at ICCAD has been attracting more than a hundred teams per year, fostering productive industry-academia collaborations and leading to hundreds of publications in top-tier conferences and journals. The contest continues to enhance its impact and boost EDA research.

  • Microfluidic devices, also known as Labs-on-a-Chip (LOCs), offer a convenient and cost-effective solution for high-throughput biochemical and medical experiments on miniaturized platforms. Recent advancements in the field have reached a new level of sophistication, including innovative diagnostic assays driven by the COVID-19 pandemic, organ-on-chip systems that mimic the physiological functions of human bodies, new manufacturing possibilities, and the emergence of ISO standards. These developments have led to more powerful microfluidic devices but, at the same time, present significant challenges for designers: precise dimensioning, placing, and routing of components and channels, accurate injection of samples and chemicals, as well as timely initiation of processes such as mixing, heating, or incubation. 

  • Traditional hardware-software (HW-SW) co-design methods, well established through languages like C/C++, SystemC, and tools for performance analysis and co-simulation, face significant challenges in meeting the demands of AI systems, where hardware and software are more tightly intertwined than ever. To fully leverage the potential of AI, a unified AI co-design loop is needed that integrates both hardware and software perspectives, optimizing for performance, power, and reliability. AI-focused co-design holds the promise of significant advancements, especially in the context of embedded AI systems in resource-limited devices. By tailoring hardware for specific AI tasks and refining AI algorithms to fit hardware constraints, co-design can drive energy efficiency far beyond what is achievable with conventional GPUs, potentially leading to orders of magnitude higher efficiency compared to current AI systems. 

  • In recent years, many GPU-accelerated EDA algorithms have been introduced, demonstrating runtime speedups of hundreds to thousands of times for specific stages in the EDA flow. Beyond runtime improvements, this special session will explore various GPU adoption models in the EDA flow that also enhance design quality, particularly in terms of power and area reduction. The session will feature a total of five talks. Three of these are invited presentations from industry leaders—Cadence, Synopsys, and Siemens—who will showcase their production-ready, GPU-based EDA solutions. The remaining two talks are from academic research groups, offering insights into emerging GPU adoption models with the potential to further improve EDA design quality.

• Chip technology is a foundational enabler across diverse sectors such as automotive, telecommunications, finance, and healthcare. Continued progress in computing performance, efficiency, and functionality has been driven by advances in technology scaling and architecture. However, five major challenges—scaling, memory, power, sustainability, and cost—are now limiting the effectiveness of traditional CMOS design methods at advanced nodes. These limitations demand a fundamental shift in design and technology approaches to meet the increasing performance and efficiency requirements of future applications. This session introduces CMOS 2.0, a new paradigm driven by system technology co-optimization (STCO), where technology and system design are developed in tandem. CMOS 2.0 breaks with the legacy of monolithic scaling by enabling heterogeneous integration through 2.5D and 3D technologies, including chiplet architectures, wafer backside processing, hybrid bonding, and sequential 3D integration. These innovations allow system designers to assign different chip functions—logic, memory, power delivery—to specialized stacked layers, optimizing for performance, energy, and area across a wide variety of workloads. To fully realize the benefits of CMOS 2.0, a corresponding revolution in Electronic Design Automation (EDA) is required. This session will explore the emerging methodologies and tools needed to exploit the expanded design space offered by advanced heterogeneous integration and STCO. It will bring together leading experts to discuss challenges and opportunities in reshaping EDA for CMOS 2.0, offering ICCAD attendees deep insights and practical strategies to drive innovation in the next era of semiconductor design.

• Traditionally, massive sensor data are continuously collected to support external processing such as Artificial Intelligence (AI)-based analysis and prediction, which consumes significant power and introduces delays. To address this limitation, we need a sensing system that is power-efficient, compact, and capable of processing large amounts of data in real time without sacrificing accuracy. One promising direction is to give sensing units computing capability beyond that of a simple sensor. In-Sensor Computing (ISC) emerges as a new computation paradigm to address the increasing concerns about latency and energy consumption in sensory data transmission, analog-to-digital conversion (ADC), and data pre-processing. The existing literature on ISC mainly investigates materials and device structures to implement desired functions, pursue better performance, and examine feasible integration technologies. While offering significant benefits in performance improvement and power saving, in-sensor computing could also bring new security challenges. Unfortunately, limited work is available on the new threats and unique attack surfaces of ISC systems. This special session fills this gap.

• Compared to software design, hardware design is more expensive and time-consuming. The software development flow is agile partly because software engineers can leverage a rich set of open source tools, such as runtimes and middleware, to get projects started and iterated easily and quickly. On the hardware side, an agile and open hardware ecosystem is rising thanks to the open RISC-V ISA and other emerging open EDA and IP tools. This not only saves costs and widens the design scope of hardware, but also provides a great opportunity for hardware innovation and creativity by designing and verifying custom hardware for different needs. This special session is formulated to be as broadly interesting and useful as possible to students, researchers, faculty, and practicing engineers in both the EDA and hardware design areas. The session lasts two hours and consists of five talks. The first talk presents a machine learning accelerator design as a RISC-V extension using an open source flow. The second talk develops an open CGRA framework. The third talk presents an FPGA-based open source RISC-V emulation platform. The fourth talk presents an agile hardware testing framework based on reinforcement learning. The last talk introduces an open-source ASIP design framework.

  • Large language models (LLMs) have demonstrated significant potential for enhancing electronic design automation (EDA), particularly in design interpretation and automation workflows. However, practical deployment remains hindered by fundamental data-related obstacles, notably the scarcity of high-quality and standardized datasets. Unlike other domains benefiting from widely available datasets, EDA data are typically fragmented across diverse tools and abstraction layers. Furthermore, synthetic datasets, while widely used, frequently fail to represent the complexities in realistic hardware design, limiting model reliability and generalization. This session brings together leading experts from both academia and industry to address pressing data challenges across the EDA workflow—from RTL to layout, and from design to manufacturing. The goal is to develop robust data generation methods, establish reliable data infrastructures, and curate representative datasets that reflect the complexity of real-world hardware designs. These efforts are critical to building scalable and trustworthy LLM applications that can support next-generation intelligent EDA tools and tackle the growing complexity of modern hardware systems.

• As emerging computing paradigms push beyond the limitations of traditional CMOS-based computing using von Neumann architectures, there is a growing need to rethink and extend Electronic Design Automation (EDA) methodologies to support their unique characteristics. These paradigms—including application-specific approximate computing, reconfigurable and secure logic using Reconfigurable Field-Effect Transistors (RFETs), in-memory computing with non-volatile memory technologies, and photonic analog wavefront computing—represent diverse and promising directions beyond conventional digital design. Collectively, they offer transformative potential for achieving significant improvements in energy efficiency, computational speed, and architectural scalability. For example, application-specific approximate computing enables the design of custom arithmetic circuits that exploit application-level error resilience, allowing optimized trade-offs between accuracy and power, performance, and area (PPA) in error-tolerant applications.
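    As a minimal illustration (not drawn from the session material), the accuracy-versus-cost trade-off of approximate arithmetic can be sketched in software with a lower-part-OR adder, a well-known approximate adder that computes the upper bits exactly but replaces lower-bit addition with a bitwise OR, eliminating carry propagation in the low part. The function name and parameters below are illustrative choices, not from the session.

```python
def loa_add(a, b, width=8, approx_bits=3):
    """Lower-part-OR adder (LOA) sketch: exact addition on the upper bits,
    bitwise OR on the lowest `approx_bits` bits (no carry propagation there).
    Dropping the low-part carry chain is what saves area and power in hardware,
    at the cost of a small, bounded error."""
    mask = (1 << approx_bits) - 1
    lower = (a & mask) | (b & mask)                       # approximate low part
    upper = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits  # exact high part
    return (upper + lower) & ((1 << (width + 1)) - 1)

# The OR never exceeds the true low-part sum (it loses carries),
# so the approximate result is always <= the exact sum.
print(loa_add(3, 5))      # exact sum is 8; the low-part OR gives 7
print(loa_add(100, 57))   # low parts carry nothing away here: exact result 157
```

Because the error is bounded by the width of the approximated low part, a designer can dial `approx_bits` per application to trade accuracy against the PPA of the carry chain.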

• As the chip manufacturing node approaches the diameter of silicon atoms, further reduction in transistor size becomes unsustainable, facing severe challenges in yield, power consumption, performance, and cost. Integrating chiplets into one package has emerged as a promising solution to further increase the computing capabilities of chips by a few orders of magnitude. It relies on mature manufacturing processes, hence gaining significant advantages in yield protection and agile design without tape-out; exploits the heterogeneity of chiplets to support customized designs for different application requirements; and offers ease of use and power savings through advanced packaging.

    With heavy investment in interconnection and packaging technologies building the physical foundation, the focus is moving upward to the system level, which determines whether heterogeneous chiplets can cooperate efficiently without bottlenecks in resource access. This involves architecture exploration and resource management at the chiplet level, two new research directions that interact with each other, while the design of individual chiplets has largely converged.

  • This special session presents a full-stack perspective on AI-driven technologies for healthcare and wellbeing, spanning from hardware innovation and AI model optimization to intelligent orchestration across edge–fog–cloud systems. It opens with the development of Edge Language Models (ELMs), a new class of compressed LLMs designed for real-time interpretation of radiology findings on edge devices, addressing the pressing need for intelligent, low-latency clinical support at the point of care. Enabling such models to function reliably in dynamic, resource-constrained environments requires system-level intelligence—addressed in the second talk through the Mindful AI framework, which dynamically balances accuracy, energy, and latency to ensure robust and personalized operation in pervasive health applications. However, deploying these intelligent systems in real-world, body-integrated scenarios also demands hardware that goes beyond conventional silicon. The third talk addresses this by presenting an end-to-end design framework for flexible healthcare wearables, offering biocompatible, sustainable, and conformable solutions suited for extreme-edge applications such as smart patches and implantables. Bringing the stack full circle, the final talk introduces a multi-agent, LLM-driven EDA flow that automates the design of flexible and printed electronics—empowering non-experts to design custom hardware. Together, these talks illustrate how emerging technologies such as large language models and flexible electronics can be cohesively integrated to create intelligent, sustainable, and human-centered healthcare solutions—perfectly aligned with ICCAD's mission to advance cross-disciplinary design automation.

  • The last two years have witnessed the rapid evolution of large language models (LLMs). While data serves as the foundation for such evolution, it also faces ever-increasing privacy risks, including data breaches, leakage, and misuse, as existing LLM services on the cloud heavily rely on the availability of plaintext prompts. Privacy-preserving deep learning (PPDL) has thus been proposed and has recently attracted increasing attention from both industry and academia. By leveraging cryptographic primitives such as homomorphic encryption (HE), multi-party computation (MPC), and zero-knowledge proofs (ZKP), PPDL achieves a formal privacy guarantee during DL computation. However, PPDL usually suffers from orders of magnitude of latency overhead due to high computation and communication costs, hindering its practical usage in real-world applications. This special session presents a full stack of techniques across the algorithm, protocol, and hardware levels to narrow the efficiency gap of PPDL. For this purpose, we have invited esteemed experts from the University of California San Diego (UCSD), New York University (NYU), North Carolina State University (NCSU), Peking University (PKU), and Tsinghua University (THU) to join the special session. The session will present four in-depth talks on efficient PPDL: UCSD and THU will focus on hardware/algorithm co-design, while NYU and NCSU will focus on the co-optimization of LLM algorithms and PPDL protocols. By fostering an interdisciplinary dialogue, this session not only presents to the audience the recent progress of PPDL but also encourages discussion on the challenges and opportunities of PPDL.
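    As a toy illustration (not part of the session material), the additive homomorphism that lets a server compute on encrypted data can be sketched with a miniature Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The primes below are deliberately tiny and completely insecure; real PPDL systems use much larger parameters, and the latency overhead the session targets comes precisely from such big-number modular arithmetic.

```python
import math
import random

def paillier_keygen(p=61, q=53):
    """Toy Paillier key generation; p and q are insecure demo primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                                           # standard simple choice of g
    # mu = (L(g^lam mod n^2))^-1 mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:                          # r must be coprime with n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 15), encrypt(pk, 27)
# Ciphertext multiplication corresponds to plaintext addition (mod n):
c_sum = (c1 * c2) % (pk[0] ** 2)
print(decrypt(pk, sk, c_sum))   # recovers 15 + 27 = 42 without ever decrypting c1 or c2 separately
```

A server holding only `c1` and `c2` can thus aggregate encrypted values, the basic building block behind HE-based PPDL inference, which is why so much of the efficiency work discussed in the session targets the cost of these modular exponentiations.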

  • Analog/radio-frequency (RF) integrated circuit (IC) design automation has been a longstanding challenge. Recent breakthroughs in generative artificial intelligence (AI) offer transformative opportunities to tackle complex, large-scale analog/RF IC design tasks beyond traditional methods. This special session gathers leading researchers to showcase cutting-edge advancements, from highly efficient inverse design methods to innovative discoveries of unconventional RF passive devices and active analog topologies. These timely developments will set the stage for scalable and extensive use of generative AI in analog/RF IC design, broadly benefiting researchers and engineers in analog/RF IC design, electronic design automation (EDA), and AI/ML communities.

  • Neuromorphic computing, by mimicking the behavior of the brain, promises highly energy-efficient processing of large amounts of data. These systems can have a profound impact on scientific computing applications, whether sensing applications that require low-latency processing or modeling and simulation. In this special session, we plan to discuss the challenges, needs, and opportunities in realizing the co-design stack necessary to make neuromorphic computing effective for scientific problems. We will discuss exemplar scientific computing applications, promising neuromorphic primitives that could enable mapping such a broad set of applications, and the analog computing aspects underpinning the realization of such neuromorphic primitives, and highlight the necessity of testbeds to drive the development of such systems.