Presumably each chip can be configured to send or receive audio directly over that connection instead of packetizing it and sending it over USB/SDIO - which can reduce latency and possibly save some power.
I also recalled that in some other phones the modem and the Bluetooth chip have a direct audio connection to each other - allowing the CPU to remain off while doing a phone call with a Bluetooth headset! Then I wondered how an operating system would have to be designed for this kind of feature to be used effectively.
---
Suppose I'm writing a generic application that makes phone calls, on some hypothetical operating system that runs on many different phones. By default I'll write something equivalent to
Code: Select all
cat < /dev/bluetooth/headset0/microphone > /dev/phonecall &   # uplink: my voice into the call
cat < /dev/phonecall > /dev/bluetooth/headset0/speaker        # downlink: the far end into the headset
That works, but it keeps the CPU awake copying every byte through userspace, even though the chips could pass the audio between themselves directly. So what kind of interface would an OS need to solve this kind of problem? I suspect something like a dataflow graph API: the app would have to do something like
Code: Select all
find_node N1 phonecall_input                  # audio arriving from the far end
find_node N2 bluetooth/headset0/speaker
link_node N1 N2                               # downlink: call -> headset
find_node N3 bluetooth/headset0/microphone
find_node N4 phonecall_output                 # audio we send to the far end
link_node N3 N4                               # uplink: headset -> call
commit_graph                                  # OS picks the best implementation
sleep forever
Now the CPU isn't even pretending to see the bits. The OS/driver system is able to process this flow graph and (via some magic algorithm) find the most efficient way to implement it. If the data can be passed directly then it does so; if zero-copy DMA is available it uses that; if not, then it starts a loop to read and write data. The API works the same way on every device, although the performance, and even the quality of the data stream, may differ.
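To make that a bit more concrete, here's a rough sketch of what the app side of such an API could look like in C. Every name in it is invented for illustration (find_node, link_node, commit_graph, the endpoint paths) - it's just the pseudocode above with types, not any real OS interface:
Code: Select all
/* Hypothetical dataflow-graph API, app side. None of these calls exist
 * on any real OS; the names are made up to mirror the pseudocode above. */
#include <stdint.h>

typedef int64_t node_t;   /* handle to an audio endpoint in the graph */

/* Assumed syscalls / library calls: */
node_t find_node(const char *path);        /* look up an endpoint by name   */
int    link_node(node_t src, node_t dst);  /* request src -> dst routing    */
int    commit_graph(void);                 /* kernel picks the best path:
                                              chip-to-chip link, DMA, or a
                                              CPU copy loop as a fallback   */

int route_call_to_headset(void)
{
    node_t call_in  = find_node("phonecall_input");            /* far end's voice */
    node_t spk      = find_node("bluetooth/headset0/speaker");
    node_t mic      = find_node("bluetooth/headset0/microphone");
    node_t call_out = find_node("phonecall_output");           /* our voice */

    if (call_in < 0 || spk < 0 || mic < 0 || call_out < 0)
        return -1;

    if (link_node(call_in, spk) < 0)  return -1;   /* downlink */
    if (link_node(mic, call_out) < 0) return -1;   /* uplink   */

    /* After this the app can just block; the audio never has to touch
       the CPU if the hardware supports a direct path. */
    return commit_graph();
}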
If you do it this way you have a new problem: what if some processing is supposed to happen in the middle? I can't actually think of a case where an app would want to inject processing into phone-call audio, other than perhaps background noise reduction, which you'd want enabled all the time in every scenario. But maybe there is one. Then what if the hardware supports fast-path processing that isn't quite identical to what the application asks for? How do you communicate to the application what it can ask for and get?
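One hypothetical way to handle both questions: let the app create a processing node, and have the OS report per-node flags saying where it can run and whether the hardware fast-path version only approximates what was asked for. Again, every name here is made up; this is a sketch of the idea, not a real API:
Code: Select all
/* Hypothetical extension: processing nodes with capability flags.
 * All names are invented for illustration. */
#include <stdint.h>

typedef int64_t node_t;

#define PROC_NOISE_REDUCTION    1

#define NODE_FLAG_HW_OFFLOAD    (1u << 0)  /* runs on DSP/codec, CPU can stay off */
#define NODE_FLAG_CPU_FALLBACK  (1u << 1)  /* OS can run it in a CPU loop instead */
#define NODE_FLAG_APPROXIMATE   (1u << 2)  /* hw version isn't bit-exact with
                                              what the app asked for */

/* Assumed calls: */
node_t create_proc_node(int kind);                 /* e.g. noise reduction     */
int    node_get_flags(node_t n, uint32_t *flags);  /* where/how it can run     */
int    link_node(node_t src, node_t dst);
int    commit_graph(void);

int insert_noise_reduction(node_t mic, node_t call_out)
{
    node_t nr = create_proc_node(PROC_NOISE_REDUCTION);
    if (nr < 0)
        return -1;

    uint32_t flags = 0;
    if (node_get_flags(nr, &flags) < 0)
        return -1;

    /* The app decides: accept an approximate hardware path (CPU stays off),
       or insist on exact processing and pay for the CPU loop. This app
       refuses a path that is approximate with no exact fallback. */
    if ((flags & NODE_FLAG_APPROXIMATE) && !(flags & NODE_FLAG_CPU_FALLBACK))
        return -1;

    if (link_node(mic, nr) < 0)      return -1;
    if (link_node(nr, call_out) < 0) return -1;
    return commit_graph();
}
That pushes the "what can I ask for and actually get" question into an explicit capability query, which is one possible answer but certainly not the only one.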
---
Similar scenarios also apply to video data (some chips can probably forward camera data straight to the display, or at least from the camera to the GPU), to network traffic (there are definitely multi-port NICs with built-in switches), and to giant supercomputers that shuffle data between NIC and GPU (although that one is probably just plain old cross-device DMA).
Any interesting thoughts? This is just nerd sniping, not something I really need help with. There's probably no good answer.