The hardware - OS interaction
Posted: Mon Nov 02, 2020 3:56 pm
Back in the "old days" it was pretty easy to create a measurement system. You just picked some ADC with a parallel interface, your favorite CPU (like Z80 or 68x), built a PCB and wrote a bit of code, and there you had a functional system.
It even worked pretty well in the days of MS-DOS, except that by then the hardware interface (like the ISA bus or PC104) was a bit more complicated. Still, it was possible to create these things without any fancy tools, and since MS-DOS had no hardware abstraction layer you could use direct I/O.
Today things are completely different. You can buy ICs that sample at more than 1 GS/s with 14 or 16 bits of resolution, and these ICs are cheap. The problem is that they are surface-mounted and use complicated interfaces. Ready-made systems are closed designs, and if they don't do what you want you cannot modify the code. Another problem is that today's PC interfaces are generally too slow to handle 4 GB/s (2 channels, 2 bytes per sample, 1 GS/s). USB 3 doesn't work, and neither does SATA or a 10G network card. About the only thing that can handle this speed is PCI Express with enough lanes, and PCIe is a complex interface that you are not likely to drive with 74HC logic.
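To put rough numbers on that claim, here is a quick back-of-the-envelope check. The per-interface throughput figures below are my approximations of usable rates (after protocol overhead), not spec maximums:

```python
# Back-of-the-envelope check of which PC interfaces could sustain a
# 2-channel, 16-bit, 1 GS/s ADC stream.

sample_rate = 1e9          # samples per second, per channel
bytes_per_sample = 2       # 16-bit samples
channels = 2

required = sample_rate * bytes_per_sample * channels  # bytes/second
print(f"Required: {required / 1e9:.1f} GB/s")         # 4.0 GB/s

# Approximate usable throughput (bytes/s), not theoretical maximums.
interfaces = {
    "USB 3.0 (5 Gbit/s)": 0.45e9,   # ~450 MB/s usable
    "SATA III (6 Gbit/s)": 0.55e9,  # ~550 MB/s usable
    "10G Ethernet": 1.1e9,          # ~1.1 GB/s usable
    "PCIe Gen3 x8": 7.0e9,          # ~7 GB/s usable
}

for name, rate in interfaces.items():
    verdict = "OK" if rate >= required else "too slow"
    print(f"{name:20s} {rate / 1e9:4.1f} GB/s -> {verdict}")
```

Only the PCIe row comes out ahead of the required rate, which is the point: everything else falls short by a factor of three or more.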
The rescue here is to use FPGAs. FPGAs can interface with PCIe, and they can also clock the serial interfaces of high-speed ADCs. However, now the next problem appears. Since writing drivers for today's complicated operating systems is over the heads of many hardware designers, they prefer to run C code on soft CPU cores in the FPGA and interface their systems through Ethernet. This creates a single-device system and works fine if you don't need raw data but can instead do the signal processing in the FPGA. It also creates a set of problems. If there is not enough RAM attached to your FPGA you cannot save much sample data, and it's impossible to stream the data over Ethernet, even 10G Ethernet. The ideal solution would be an ordinary PC with lots of RAM, with data accessed like in the MS-DOS days. However, you cannot do it like that, since you cannot talk to PCIe from an ordinary Windows (or Linux) application; you need a driver. When somebody writes the driver for you, they take away your ability to decide how to handle the data. They add fancy signal analysis and spectral views (just as the FPGA-only solutions do from a web page), but you typically cannot save all the data and access it raw.
In the end, to create an optimal solution you need an ADC evaluation board, an FPGA evaluation board with PCI Express, and your own operating system (or the knowledge and tools to develop a driver for a standard OS). You need to be able to write code for the FPGA and a good driver for the OS. If you use your own operating system, you already have the required knowledge, plus something that can be adapted to the task.
I'd say very few people have this knowledge, which is why we have single-chip FPGA solutions and a few closed designs where you only get an API that is likely too limited.
The barrier to entry is also extremely high. You need to learn Verilog, the FPGA design tools, maybe even the C abstraction used in the FPGA, and how to write operating system drivers. So even though you can build things with performance levels that would have seemed impossible 20 or so years ago, it's also many times more complicated.
There are other examples of this too. Today a lot of microcontrollers can interface with USB, but they generally only handle low speed, which is fine if you don't need a fast interface but unsatisfactory otherwise. To use higher USB speeds you need a complicated ARM processor (or another embedded 32-bit processor), an incredibly complex design environment, and possibly an operating system (like Linux). So with high-speed USB you are basically back to very complicated designs and device-driver development, while you can create low-speed USB devices with nothing but an assembler (or a C cross compiler).
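The same kind of arithmetic shows where the USB speed classes run out. The two ADC front ends below are hypothetical examples I picked for illustration, and the figures are the nominal USB signalling rates, so real usable throughput is lower:

```python
# Nominal USB signalling rates per speed class (bits/s). Usable
# throughput is lower due to protocol overhead.
usb_classes = {
    "low speed (USB 1.x)": 1.5e6,
    "full speed (USB 1.x)": 12e6,
    "high speed (USB 2.0)": 480e6,
}

# Two hypothetical front ends: a slow one that a small MCU could serve,
# and a faster one that pushes you up the complexity ladder.
slow_adc = 50e3 * 16    # 50 kS/s, samples padded to 16 bits on the wire
fast_adc = 1e6 * 16     # 1 MS/s, likewise

for name, rate in usb_classes.items():
    slow_ok = "ok" if rate >= slow_adc else "no"
    fast_ok = "ok" if rate >= fast_adc else "no"
    print(f"{name:22s} slow ADC: {slow_ok:3s} fast ADC: {fast_ok}")
```

A 50 kS/s instrument squeezes through even low speed, but at 1 MS/s nothing below high speed keeps up, and high speed is exactly where the simple single-chip solutions stop.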