
character and block - best representation of devices?

Posted: Mon Jan 07, 2008 12:26 am
by AndrewAPrice
When the Unix device specification was first written, devices could be accessed in one of two ways:

As a block of data: Which works fine for disk drives, disk partitions, or anything that can be represented as a chunk of data.

As a stream of characters: Which suits text terminals, keyboards, and line printers.

I like this idea, since by using only a small set of calls (read, write, seek) one can access any device through a consistent interface.

However, in a modern and more complex system (specifically in my micro-kernel environment), I do not see how this abstraction can suit every device.

For example, consider a graphics device. Using the Unix philosophy, this device can be represented as a block device, giving linear access to the frame buffer. But, in doing so, how would you handle things like flipping the buffer and setting the resolution?

The same problem applies to a sound device. You could represent it as a character device and constantly stream raw sound data to it. But how do you set the bit rate, change the volume, or specify whether the data is an uncompressed, compressed, or MIDI stream?

There are many more examples of this, such as setting the burn speed or ejecting the tray of an optical drive.

The best way I have thought of to get around this is to use multiple character and block devices to represent one physical device. Using the graphics device example, you could use a block device to represent the frame buffer and a character device to send special commands to the hardware. But in doing so you are creating two virtual devices to represent one physical device.

The problem could become a lot more extreme. If a sound device supports 6 independent channels used for surround sound, each channel would need to be represented by a different character device. To control the volume and bit rate of each channel, you would need another character device per channel. That equals 12 character devices representing one sound card! (Although there might be advantages to representing each channel individually, that is not the point of this argument.)

I am looking for a driver-abstraction/interface where each physical device can be represented by a single virtual device through a consistent interface. Is this possible using the *nix character/block philosophy, or maybe using an as-yet-unimplemented third device type? Or am I looking for a completely different paradigm?

Posted: Mon Jan 07, 2008 1:35 am
by xyzzy
Simple answer: ioctl. It lets you send extra requests to a device node. For example, my VBE driver (finally working, yay!) has an ioctl command to switch mode:

Code: Select all

/* Headers assumed for a POSIX-style environment. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/ioctl.h>

uint16_t mode = 0x114;	/* VBE mode 0x114 = 800x600, 16 bpp */
int fd;

/* Open the VBE driver's device node. */
fd = open("/Devices/vbe", O_RDWR);
if(fd < 0) {
	perror("open");
	exit(1);
}

/* Request 1 is this driver's "set mode" command. */
ioctl(fd, 1, &mode);
That'd switch to mode 0x114 (800x600x16). Then you can write to the framebuffer with a write call, or mmap the framebuffer and just write to it in memory.
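To make the mmap route concrete, here is a minimal sketch that continues from the code above (it reuses fd). The device path and mode come from my driver; the framebuffer being mapped at offset 0 and the pitch being exactly 800*2 bytes are assumptions, not something the driver guarantees:

Code: Select all

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Map the 800x600, 16 bpp framebuffer and plot one pixel.
   fd is the descriptor opened above; offset 0 and a pitch of
   800*2 bytes are assumptions about the driver. */
size_t fb_size = 800 * 600 * 2;
uint16_t *fb = mmap(NULL, fb_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
if(fb != MAP_FAILED) {
	fb[100 * 800 + 50] = 0xFFFF;	/* white pixel at (50, 100) */
	munmap(fb, fb_size);
}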

Re: character and block - best representation of devices?

Posted: Mon Jan 07, 2008 2:14 am
by Brynet-Inc
Just as the person above mentioned, ioctl has been around since the early AT&T days of Unix, for the purpose you've described.
MessiahAndrw wrote:There are many more examples of this, such as setting the burn speed or ejecting the tray of an optical drive.
OpenBSD, for example, has a utility called "cdio", a nice little tool that allows you to play a music CD, burn a CD, eject a CD, etc.

Looking through the source, I can see that it opens a file descriptor to the CD-ROM's character device and makes two ioctl calls: ioctl(fd, CDIOCALLOW); and then ioctl(fd, CDIOCEJECT);

Both CDIOCALLOW and CDIOCEJECT are defined in sys/cdio.h (On OpenBSD anyway..).
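Put together, ejecting the tray boils down to something like the rough sketch below; the device path /dev/rcd0c is an assumption on my part (cdio works out the correct device name itself):

Code: Select all

#include <sys/ioctl.h>
#include <sys/cdio.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* The device path is an assumption; cdio resolves it itself. */
	int fd = open("/dev/rcd0c", O_RDONLY);
	if(fd < 0) {
		perror("open");
		return 1;
	}
	ioctl(fd, CDIOCALLOW);	/* unlock the tray */
	ioctl(fd, CDIOCEJECT);	/* eject the disc */
	close(fd);
	return 0;
}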
MessiahAndrw wrote:The same problem applies to a sound device. You could represent it as a character device and constantly stream raw sound data to it. But how do you set the bit rate, change the volume, or specify whether the data is an uncompressed, compressed, or MIDI stream?
The same basic principles apply, but the API for sound cards is more complex.. and there are utilities for configuring various sound card settings (audioctl/mixerctl.. etc). Unlike cdio, audioctl and mixerctl open the "pseudo" /dev/audioctl and /dev/mixer character devices, all part of OpenBSD's "audio framework".

Code: Select all

[brynet@ttyp1]~: $ dmesg | grep emu0
emu0 at pci0 dev 13 function 0 "Creative Labs SoundBlaster Live" rev 0x08: irq 11
audio0 at emu0
[brynet@ttyp1]~: $ audioctl name         
name=SB Live!
[brynet@ttyp1]~: $ mixerctl outputs.bass 
outputs.bass=255
[brynet@ttyp1]~: $ 
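Under the hood, mixerctl is just doing mixer ioctls on /dev/mixer. A rough sketch using the sys/audioio.h interface; the control index 0 is only a placeholder, real code walks AUDIO_MIXER_DEVINFO to find the control it wants (e.g. outputs.bass):

Code: Select all

#include <sys/ioctl.h>
#include <sys/audioio.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	mixer_ctrl_t ctrl;
	int fd = open("/dev/mixer", O_RDONLY);
	if(fd < 0) {
		perror("open");
		return 1;
	}
	/* Control index 0 is a placeholder; enumerate with
	   AUDIO_MIXER_DEVINFO to find the control you want. */
	ctrl.dev = 0;
	ctrl.type = AUDIO_MIXER_VALUE;
	ctrl.un.value.num_channels = 1;
	if(ioctl(fd, AUDIO_MIXER_READ, &ctrl) == 0)
		printf("level=%d\n", ctrl.un.value.level[0]);
	close(fd);
	return 0;
}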
Other Unix systems have different frameworks... frameworks that can build upon and work with other frameworks. It's all quite harmonious. :D

The "Unix philosophy" will never die my friend.. 8)