Well, to begin with, the [tt]Writer[/tt] and [tt]Reader[/tt] classes are completely abstract; you cannot instantiate them directly at all. They serve only as the root of the character I/O hierarchy, providing the methods common to all character I/O classes.
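For instance, just to illustrate the hierarchy (this is only a sketch, with "myfile.txt" as a stand-in name and exception handling left out), you can't say new Reader(), but any concrete subclass can be passed around as a Reader:
Code: Select all
// Reader r = new Reader();               // won't compile; Reader is abstract
Reader r = new FileReader("myfile.txt");   // but any concrete subclass will do
int c = r.read();                          // read() itself is declared on Reader
System.out.println("First char: " + (char) c);
r.close();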
The BufferedReader and BufferedWriter classes, as the names imply, specifically provide buffering of the data. For example, a write() call on a BufferedWriter is not written out at the time of the call; it goes into a buffer until either the buffer fills up or a flush() call is made on that same BufferedWriter. (I don't know offhand whether it will flush() automatically if the program exits first; I would expect so, but it would be terrible practice to rely on it.) Similarly, a read() call on a BufferedReader reads a whole buffer's worth of data rather than just what you requested, so that subsequent requests for the data following it do not each require a full read of their own.
Since the BufferedReader and BufferedWriter classes do not do any actual device I/O themselves, they are not used on their own, but are usually used as 'filter' classes to add buffering to a class that does raw, unbuffered I/O. For example, a common idiom for file writing would be:
Code: Select all
PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter("myfile.txt")));
pw.println("Hello, World!");
pw.println("This still hasn't been written to disk yet, has it?");
pw.println("Nope, but it will be after the flush() call...");
pw.flush(); // force output
pw.println("Those last three lines were just written, but this one hasn't been yet.");
pw.flush();
pw.close();
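And for the read-ahead side mentioned above, a minimal sketch of the reading counterpart (same made-up myfile.txt, exception handling again omitted) would be:
Code: Select all
BufferedReader br = new BufferedReader(new FileReader("myfile.txt"));
String line;
while ((line = br.readLine()) != null) {   // readLine() is served out of the buffer,
    System.out.println(line);              // so most calls never touch the disk at all
}
br.close();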
This is done primarily for efficiency's sake: many kinds of I/O operations (e.g., disk reads and writes) carry an inherent per-operation overhead, so reducing the number of actual operations via buffering can greatly improve overall throughput.
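If you want to see the difference for yourself, a quick-and-dirty comparison would be something like the following (completely unscientific; the numbers will vary by OS and disk, and FileWriter may already do some buffering of its own underneath, so the gap can be smaller than you'd expect):
Code: Select all
long start = System.nanoTime();
FileWriter raw = new FileWriter("unbuffered.txt");
for (int i = 0; i < 100000; i++) {
    raw.write("line " + i + "\n");         // no buffer of our own in front of the file
}
raw.close();
System.out.println("Unbuffered: " + (System.nanoTime() - start) / 1000000 + " ms");

start = System.nanoTime();
BufferedWriter buf = new BufferedWriter(new FileWriter("buffered.txt"));
for (int i = 0; i < 100000; i++) {
    buf.write("line " + i + "\n");         // most calls just append to the in-memory buffer
}
buf.close();                               // close() flushes whatever is still sitting there
System.out.println("Buffered: " + (System.nanoTime() - start) / 1000000 + " ms");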