20 Innovative Companies of the Year 2024

Abacus Semiconductor Corporation – Revolutionizing Supercomputing and AI with Innovative Processor Solutions for Enhanced Performance, Scalability, and Business Efficiency

The landscape of high-performance computing (HPC) and artificial intelligence (AI) is undergoing a transformative shift, driven by innovations that promise to redefine the boundaries of computational capabilities. As the demand for faster and more efficient computing solutions continues to grow, companies are pushing the envelope with groundbreaking technologies that address the limitations of existing architectures.

Leading the charge in this exciting era of innovation is Abacus Semiconductor Corporation, a fabless semiconductor company specializing in the design and engineering of advanced processors, accelerators, and smart multi-homed memories. With a focus on pushing the boundaries of what's possible in supercomputing and AI applications, including large language models like GPT-4, Abacus is setting new standards for performance, efficiency, and scalability.

In an exclusive interview with CIO Bulletin, Axel K. Kloth, CEO of Abacus Semiconductor Corporation, shed light on how his company is paving the way for a brighter and more connected future through its groundbreaking solutions.

Interview Highlights

Q. Could you share the story behind the inception of Abacus Semiconductor Corporation? What inspired the founding team to establish the company and enter the semiconductor industry?

I started Abacus Semiconductor Corporation because I was frustrated with the status quo in supercomputers and in the corporate and internet backend systems that create the foundational models for AI. Today’s solutions are largely based on technology that has been around for decades, and many of the assumptions behind that technology are simply no longer correct. While the growth of computational performance has been astounding, we believe that the challenges and the areas of deployment have grown even faster. A smartphone today has higher computational performance than a supercomputer in 1990, at less than 1/1,000 of the cost and less than 1/1,000,000 of the power consumption.

However, we face computational challenges today that have grown by more than a factor of 1,000,000, and as such, we need different solutions. In other words, no matter how many processor cores we can cram onto a single die (the silicon piece that contains the processor), it will not be enough; we need to be able to connect as many processor and accelerator cores as needed to solve a computational challenge in a reasonable amount of time. While the processor core is important, most computation today is actually executed within accelerators, and therefore connecting processor cores, accelerator cores, and smart memories is the more important solution to the growth problem.

Q. What are the flagship products and services offered by Abacus Semiconductor Corporation, and how do they cater to the needs of clients in supercomputing, AI/ML, and HPC domains?

We have developed multiple products to build all of the systems that constitute an energy-efficient supercomputer and AI computer. The first one is our Server-on-a-Chip. It can act as the centerpiece of a storage appliance or a general-purpose server and connects network input and output (I/O) to mass storage I/O while allowing for the collection of metadata and heuristics. In other words, it has the smarts to act as a firewall for network traffic, connects to mass storage, and lets the user know when it is time to buy more disks (solid-state disks or SSDs, or hard disks). Additionally, it notifies the user when mass storage devices start to fail and need to be replaced. The Server-on-a-Chip sustains very high levels of I/O performance, allowing the supercomputers connected to it to focus on computation rather than waiting for network or mass storage I/O.

The second part is our smart multi-homed memory, which can be used to share data across processors and accelerators, complementing our Server-on-a-Chip. This is analogous to how an efficient office operates. Someone collects all tasks from the outside, another person determines the process to finish the task, someone else organizes it all in a well-defined order, and yet another person works on and completes the task. When everything is completed, the first person takes the results and hands them back to the person who requested the task to be worked on and completed.

Our third product is our Math Processor, which is focused on executing high-level math functions, especially those needed for traditional High-Performance Computing (HPC) and for generating AI models. This involves vector math, matrix math, tensor math, and a class of operations called transforms. It utilizes our smart multi-homed memory for all input, output, and transient data, and our Server-on-a-Chip for network and mass storage I/O.
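
To make those operation classes concrete, the short sketch below shows one representative of each: a vector dot product, a matrix multiplication, a tensor contraction, and a Fourier transform. It uses NumPy purely for illustration and does not reflect the Math Processor's actual instruction set or programming interface.

```python
# Illustrative only: one example of each operation class mentioned above,
# expressed with NumPy; not Abacus Semiconductor code or APIs.
import numpy as np

rng = np.random.default_rng(0)

# Vector math: dot product of two vectors
a, b = rng.standard_normal(1024), rng.standard_normal(1024)
vector_result = a @ b

# Matrix math: multiply two 256 x 256 matrices
A, B = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
matrix_result = A @ B

# Tensor math: contract a rank-3 tensor with a matrix along one axis
T = rng.standard_normal((16, 32, 64))
tensor_result = np.einsum("ijk,kl->ijl", T, rng.standard_normal((64, 8)))

# Transform: fast Fourier transform of a sampled signal
spectrum = np.fft.fft(rng.standard_normal(4096))
```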

Q. How does Abacus Semiconductor Corporation's beyond-von-Neumann and beyond-Harvard CPU architecture address the performance limitations commonly experienced in supercomputing and AI/ML applications?

Today’s computers are all based on what is called the von Neumann architecture. It describes a processor with an input port to receive data and instructions, and an output port to emit the results of processing that data. While that is theoretically a complete description of how any computational element can and should work, it does not reflect reality.

Intermediate results need to be stored somewhere, and sometimes the rate at which input data arrives does not match the processor’s ability to consume it. As such, some input data needs to be stored so that it is available for processing once the processor has the capacity to work on it. A similar problem arises on output – sometimes the processor cannot output its data because the recipient is not ready to accept it. As a result, output data may have to be stored somewhere so that it does not need to be discarded, which would obviously waste time and energy.

To store this data, a von Neumann computer adds a memory interface, which is used to hold input data, output data, and all intermediate data, as well as the instructions that are scheduled to be executed. A processor that follows the Harvard architecture principle adds a second memory interface dedicated to instructions, simply because a processor’s access pattern for instructions differs vastly from its access patterns for all other data. Neither architecture allows for a simple high-bandwidth and low-latency connection to other processors or cores, and that is what our patent-pending technology enables. As such, we solve the memory bandwidth problem in conjunction with our smart multi-homed memory, and the inter-processor communication (IPC) problem between processors and accelerators.
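
As a software analogy of that buffering requirement (an illustration of the data-flow argument only, not of any Abacus hardware), the sketch below uses bounded queues as stand-ins for memory: input that arrives in bursts and output that a slow recipient cannot yet accept both wait in a buffer instead of being discarded.

```python
# Toy illustration of why a von Neumann machine needs memory for buffering:
# input can arrive faster than it is processed, and results can be produced
# faster than the recipient accepts them, so both sides need somewhere to wait.
import queue
import threading

input_buf = queue.Queue(maxsize=8)    # stands in for memory holding input data
output_buf = queue.Queue(maxsize=8)   # stands in for memory holding results

def producer():
    for i in range(32):               # bursty input source
        input_buf.put(i)              # blocks when the buffer is full
    input_buf.put(None)               # end-of-stream marker

def processor():
    while (item := input_buf.get()) is not None:
        output_buf.put(item * item)   # "compute" and stage the result
    output_buf.put(None)

def consumer():
    results = []
    while (item := output_buf.get()) is not None:
        results.append(item)          # slow recipient drains at its own pace
    print(f"received {len(results)} results")

threads = [threading.Thread(target=f) for f in (producer, processor, consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```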

Q. Could you elaborate on the key features of Abacus Semiconductor Corporation's Server-on-a-Chip and how it differs from traditional server processor solutions?

Traditional processors follow a design principle in which all processor cores are identical, and each one of them can execute any task. This design is known as Symmetric Multi-Processing (SMP). SMP allows for a high degree of flexibility and offers many advantages. However, with die sizes increasing and transistor densities reaching enormous levels—in today's modern semiconductor manufacturing processes, we can easily place 100 billion transistors onto a single die—the number of cores on a processor die presents challenges. Specifically, I/O becomes limited, and many cores may be idle a large percentage of the time. Consequently, extensive logic is required to put processor cores to sleep and wake them up as needed, adding complexity to the system.

The current solution favored by legacy providers is to use two different types of cores: efficiency cores and performance cores. The idea is that if the load is low, the efficiency cores handle the tasks to keep power consumption low. When demand increases, the performance cores are activated to take over the compute tasks. However, transitioning between cores requires moving data, instructions, and context, which consumes time and energy. Additionally, some instructions may not be present in efficiency cores, necessitating emulation or task switching to a performance core.

We believe that this approach is not an efficient use of transistors. At Abacus Semiconductor Corporation, we separate the functional units of a processor, designating specific cores in conjunction with hardware accelerators for network I/O, mass storage I/O, and application execution. In traditional processors based on the SMP philosophy, when a core executing an application program encounters a network or mass storage I/O function, it must handle that function itself. Because both network and mass storage I/O take far longer than a processor core's cycle time, the core either waits or switches tasks to avoid idling during each part of the I/O operation, which can involve multiple slow I/O steps.

In Abacus Semiconductor Corporation's Server-on-a-Chip, an application processor redirects all network I/O and mass storage I/O to the respective subsystems. This allows the processor to switch between tasks immediately and start or continue another task without being interrupted by slow I/O operations. The processor is notified when the I/O operation is completed and can then resume the task, reducing idle time and waiting.
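
The toy sketch below models that scheduling idea in software, with Python's asyncio standing in for the I/O subsystems: an application task hands off a slow I/O operation, keeps computing, and resumes only when the completion is signalled. It illustrates the concept only and is not how the Server-on-a-Chip is programmed.

```python
# Conceptual model of I/O offloading: the "application core" delegates slow
# I/O to a separate subsystem, continues other work, and resumes the original
# task when completion is signalled. Not Abacus Semiconductor code.
import asyncio

async def slow_io(label: str, seconds: float) -> str:
    await asyncio.sleep(seconds)          # stands in for network or disk I/O
    return f"{label} done"

async def application_task(name: str) -> None:
    io_job = asyncio.create_task(slow_io(f"{name} I/O", 0.5))  # offload the I/O
    # The core is free to keep computing while the I/O subsystem works.
    partial = sum(i * i for i in range(100_000))
    result = await io_job                 # resume only when completion is signalled
    print(f"{name}: compute={partial}, {result}")

async def main() -> None:
    await asyncio.gather(*(application_task(f"task{i}") for i in range(3)))

asyncio.run(main())
```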

Q. How does Abacus Semiconductor Corporation integrate its processors, accelerators, and smart memory for high-performance across AI/ML, HPC, and database management?

We believe that many interfaces in use today are being used contrary to their intended purposes, which stretches them beyond their design limits. For example, PCIe stands for Peripheral Component Interconnect Express. It was designed and intended to connect external devices such as SAS and SATA mass storage controllers to the CPU. It is also used for video output, with all graphics functions being executed on the graphics processing unit (GPU), and it connects to USB for keyboards, mice, and portable mass storage devices. It was never intended to be used as an interconnect between a CPU core and a GPGPU accelerator core, or even for sharing memory; its latency is too high for that. We have developed an internal and external interconnect that connects processor cores, accelerator cores, and smart multi-homed memory with each other at very low latency and very high bandwidth over a limited number of pins (or LGA pads). We made this interface universal so that all of those connections are possible.
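
A back-of-envelope calculation illustrates why per-transaction latency, rather than raw link bandwidth, dominates small transfers such as cache-line-sized memory sharing. All numbers in the sketch below are assumptions chosen only to show the shape of the effect; they are not measurements of PCIe or of Abacus Semiconductor's interconnect.

```python
# Back-of-envelope illustration (all numbers are assumptions, not measurements):
# when transfers are small, per-transaction latency dominates achievable
# throughput, which is why a high-latency peripheral bus is a poor fit for
# fine-grained core-to-core or memory-sharing traffic.
def effective_bandwidth(bytes_per_transfer: int,
                        latency_s: float,
                        link_bandwidth_Bps: float) -> float:
    """Achievable bytes/s when each transfer pays a fixed round-trip latency."""
    transfer_time = latency_s + bytes_per_transfer / link_bandwidth_Bps
    return bytes_per_transfer / transfer_time

LINK = 32e9  # assumed raw link bandwidth: 32 GB/s
for latency_us, label in [(1.0, "high-latency peripheral bus"),
                          (0.05, "low-latency fabric")]:
    bw = effective_bandwidth(64, latency_us * 1e-6, LINK)
    print(f"64-byte transfers over a {label}: {bw / 1e6:.1f} MB/s effective")
```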

Q. What future advancements is Abacus Semiconductor Corporation focusing on to enhance the performance, scalability, and versatility of its semiconductor solutions in computing and data processing?

We anticipate that the energy required to transfer a bit will continue to decrease, whether over a short-reach electrical interface or by medium-reach optical means. We have designed and built our interface so that the transport layer is independent of the logic layers, allowing us to use either one or a combination of both. We hope that the industry will join us on this journey, as we plan to make the interface and the logic available to anyone for licensing on a FRAND (Fair, Reasonable, and Non-Discriminatory) basis.

The Stalwart Leader Upfront

Axel K. Kloth, the founder, President, and CEO of Abacus Semiconductor Corporation, is a physicist and computer scientist with a deep expertise in high-performance computing (HPC) and artificial intelligence (AI). With hands-on experience in deploying and developing cutting-edge solutions, Axel has a keen understanding of what truly works in technology and market demands. As a serial entrepreneur, he exhibits a sharp instinct for innovation, driving the company's mission to redefine computational excellence.

“Interfaces today are often misused, pushed beyond their design limits. We've crafted an interconnect—internal and external—that unites processor cores, accelerators, and smart memory at minimal latency and maximum bandwidth, all on a compact pin count. Our universal design empowers seamless connectivity across the board.”

