Free for All by Peter Wayner
Linux users also bragged about the quality of their desktop interface. Most of the uninitiated thought of Linux as a hacker's system built for nerds, yet two very good desktop environments, GNOME and KDE, had recently taken hold. Both offered the user an environment that looked just like Windows but was better. Linux hackers started bragging that they were able to equip their girlfriends, mothers, and friends with Linux boxes without grief. Even people with little computer experience were adopting Linux with little trouble.
Building websites and supercomputers is not an easy task, and it is often done in back rooms, out of the sight of most people. When people began realizing that the free software hippies had slowly managed to take over a large chunk of the web server and supercomputing world, it dawned on them that perhaps Microsoft's claim was viable.
Still, Torvalds had high ambitions. He was writing a toy, but he wanted it to have many, if not all, of the features found in full-strength UNIX versions on the market. On July 3, he started wondering how to accomplish this and placed a posting on the USENET newsgroup comp.os.minix, writing:
Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.
Torvalds's question was pretty simple. When he wrote the message in 1991, UNIX was one of the major operating systems in the world. The project that started at AT&T and Berkeley was shipping on computers from IBM, Sun, Apple, and most manufacturers of the higher-powered machines known as workstations. Wall Street banks and scientists loved the more powerful hardware, and they loved the simplicity and hackability of UNIX. In an attempt to unify the marketplace, computer manufacturers created a way to standardize UNIX and called it POSIX, which ensured that every UNIX machine would behave in the same predictable way.
Torvalds worked quickly. By September he was posting notes to the group with the subject line "What would you like to see most in Minix?" He was adding features to his clone, and he wanted to take a poll about where he should add next.
Torvalds already had some good news to report. "I've currently ported bash(1.08) and GCC(1.40), and things seem to work. This implies that I'll get something practical within a few months," he said.
At first glance, he was making astounding progress. He created a working system with a compiler in less than half a year. But he also had the advantage of borrowing from the GNU project. Stallman's GNU project group had already written a compiler (GCC) and a nice text user interface (bash). Torvalds just grabbed these because he could. He was standing on the shoulders of the giants who had come before him.
The core of an OS is often called the "kernel," which is one of the strange words floating around the world of computers. When people are being proper, they note that Linus Torvalds was creating the Linux kernel in 1991. Most of the other software, like the desktop, the utilities, the editors, the web browsers, the games, the compilers, and practically everything else, was written by other folks. If you measure this in disk space, more than 95 percent of the code in an average distribution lies outside the kernel. If you measure it by user interaction, most people using Linux or BSD don't even know that there's a kernel in there. The buttons they click, the websites they visit, and the printing they do are all controlled by other programs that do the work.
Of course, measuring the importance of the kernel this way is stupid. The kernel is sort of the combination of the mail room, boiler room, kitchen, and laundry room for a computer. It's responsible for keeping the data flowing between the hard drives, the memory, the printers, the video screen, and any other part that happens to be attached to the computer.
In many respects, a well-written kernel is like a fine hotel. The guests check in, they're given a room, and then they can order whatever they need from room service and a smoothly oiled concierge staff. Is this new job going to take an extra 10 megabytes of disk space? No problem, sir. Right away, sir. We'll be right up with it. Ideally, the software won't even know that other software is running in a separate room. If that other program is a loud rock-and-roll MP3 playing tool, the other software won't realize that when it crashes and burns up its own room. The hotel just cruises right along, taking care of business.
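The room-service request in the analogy--give this job an extra 10 megabytes of disk--boils down to a couple of kernel calls. Here is a minimal Python sketch of that exchange; the scratch file is purely illustrative, and the kernel quietly handles the actual block allocation:

```python
import os
import tempfile

REQUEST = 10 * 1024 * 1024  # 10 megabytes, as in the analogy

fd, path = tempfile.mkstemp()      # "check in": get a handle from the kernel
try:
    os.ftruncate(fd, REQUEST)      # "room service": grow the file to 10 MB
    size = os.fstat(fd).st_size    # confirm what the kernel delivered
    print(size == REQUEST)         # prints True
finally:
    os.close(fd)                   # "check out": release the space
    os.remove(path)
```

The program never worries about which disk blocks it got or who else is staying in the hotel; that bookkeeping is the kernel's job.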
In 1991, Torvalds had a short list of features he wanted to add to the kernel. The Internet was still a small network linking universities and some advanced labs, and so networking was a small concern. He was only aiming at the 386, so he could rely on some of the special features that weren't available on other chips. High-end graphics hardware cards were still pretty expensive, so he concentrated on a text-only interface. He would later fix all of these problems with the help of the people on the Linux kernel mailing list, but for now he could avoid them.
Still, hacking the kernel means anticipating what other programmers might do to ruin things. You don't know if someone's going to try to snag all 128 megabytes of RAM available. You don't know if someone's going to hook up a strange old daisy-wheel printer and try to dump a PostScript file down its throat. You don't know if someone's going to create an endless loop that's going to write random numbers all over the memory. Stupid programmers and dumb users do these things every day, and you've got to be ready for it. The kernel of the OS has to keep things flowing smoothly between all the different parts of the system. If one goes bad because of a sloppy bit of code, the kernel needs to cut it off like a limb that's getting gangrene. If one job starts heating up, the kernel needs to try to give it all the resources it can so the user will be happy. The kernel hacker needs to keep all of these things straight.
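The kernel's defenses against runaway jobs are visible from user space as resource limits. A sketch using Python's `resource` module on a Unix-like system (the one-hour cap here is an arbitrary illustration): once a process exceeds its soft CPU limit, the kernel sends it SIGXCPU--its way of cutting off a job stuck in an endless loop.

```python
import resource

# Ask the kernel to cap this process's CPU time. The soft limit
# may never exceed the hard limit, so clamp it if necessary.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
new_soft = 3600 if hard == resource.RLIM_INFINITY else min(3600, hard)
resource.setrlimit(resource.RLIMIT_CPU, (new_soft, hard))

# The kernel now enforces the cap on every instruction this
# process executes, with no further cooperation required from it.
print(resource.getrlimit(resource.RLIMIT_CPU)[0] == new_soft)
```

Memory hogs get the same treatment through limits like RLIMIT_AS; the principle is identical--the kernel, not the offending program, decides when the gangrenous limb comes off.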
Creating an operating system like this is no easy job. Many of the commercial systems crash frequently for no perceptible reason, and most of the public just takes it.[^4] Many people somehow assume that it must be their fault that the program failed. In reality, it's probably the kernel's. Or more precisely, it's the kernel designer's fault for not anticipating what could go wrong.
[4]: "Microsoft now acknowledges the existence of a bug in the tens of millions of copies of Windows 95 and Windows 98 that will cause your computer to 'stop responding (hang)'--you know, what you call crash--after exactly 49 days, 17 hours, 2 minutes, and 47.296 seconds of continuous operation. . . . Why 49.7 days? Because computers aren't counting the days. They're counting the milliseconds. One counter begins when Windows starts up; when it gets to 2^32 milliseconds--which happens to be 49.7 days--well, that's the biggest number this counter can handle. And instead of gracefully rolling over and starting again at zero, it manages to bring the entire operating system to a halt."--James Gleick in the New York Times.
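Gleick's arithmetic is easy to check: a 32-bit counter rolls over after 2^32 milliseconds, and converting that into days, hours, minutes, and seconds reproduces his figures exactly.

```python
# A 32-bit millisecond counter overflows after 2**32 milliseconds.
ms = 2 ** 32                 # 4,294,967,296 milliseconds
seconds = ms / 1000          # 4,294,967.296 seconds

days, rem = divmod(seconds, 86400)   # whole days, leftover seconds
hours, rem = divmod(rem, 3600)
minutes, secs = divmod(rem, 60)

print(int(days), int(hours), int(minutes), round(secs, 3))
# prints: 49 17 2 47.296
```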
By the mid-1970s, companies and computer scientists were already experimenting with many different ways to create workable operating systems. While the computers of the day weren't very powerful by modern standards, the programmers created operating systems that let tens if not hundreds of people use a machine simultaneously. The OS would keep the different tasks straight and make sure that no user could interfere with another.
As people designed more and more operating systems, they quickly realized that there was one tough question: how big should it be? Some people argued that the OS should be as big as possible and come complete with all the features that someone might want to use. Others countered with stripped-down designs that came with a small core of the OS surrounded by thousands of little programs that did the same thing.
To some extent, the debate is more about semantics than reality. A user wants the computer to be able to list the different files stored in one directory. It doesn't matter if the question is answered by a big operating system that handles everything or a little operating system that uses a program to find the answer. The job still needs to be done, and many of the instructions are the same. It's just a question of whether the instructions are labeled the "operating system" or an ancillary program.
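The directory-listing example can be made concrete. Whether the work is labeled "operating system" or "ancillary program," the same kernel call runs underneath: Python's `os.listdir` wraps the kernel's directory-reading system call directly, while the standalone `ls` utility is a separate little program that makes the same call and prints the answer. A sketch for a Unix-like system with `ls` installed:

```python
import os
import subprocess
import tempfile

# Create a scratch directory holding two known files.
with tempfile.TemporaryDirectory() as d:
    for name in ("alpha.txt", "beta.txt"):
        open(os.path.join(d, name), "w").close()

    # Route 1: a library call that asks the kernel directly.
    from_library = sorted(os.listdir(d))

    # Route 2: a separate little program that asks the same kernel.
    out = subprocess.run(["ls", d], capture_output=True, text=True)
    from_program = sorted(out.stdout.split())

    print(from_library == from_program)  # prints True: same answer either way
```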
But the debate is also one about design. Programmers, teachers, and the Lego company all love to believe that any problem can be solved by breaking it down into small parts that can be assembled to create the whole. Every programmer wants to turn the design of an operating system into thousands of little problems that can be solved individually. This dream usually lasts until someone begins to assemble the parts and discovers that they don't work together as perfectly as they should.
When Torvalds started crafting the Linux kernel, he decided he was going to create a bigger, more integrated version that he called a "monolithic kernel." This was something of a bold move because the academic community was entranced with what they called "microkernels." The difference is partly semantic and partly real, but it can be summarized by analogy with businesses. Some companies try to build large, smoothly integrated operations that control all the steps of production. Others try to create smaller operations that subcontract much of the production work to other companies. One is big, monolithic, and all-encompassing, while the other is smaller, fragmented, and heterogeneous. It's not uncommon to find two companies in the same industry taking different approaches and thinking they're doing the right thing.
The design of an operating system often boils down to the same decision. Do we want to build a monolithic core that handles all the juggling internally, or do we want a smaller, more fragmented model that should be more flexible as long as the parts interact correctly?
In time, the OS world started referring to this core as the kernel of the operating system. People who wanted to create big OSs with many features wrote monolithic kernels. Their ideological enemies who wanted to break the OS into hundreds of small programs running on a small core wrote microkernels. Some of the most extreme folks labeled their work a nanokernel because they thought it did even less and thus was even more pure than those bloated microkernels.
The word "kernel" is a bit confusing for most people because they often use it to mean a fragment of an object or a small fraction. An extreme argument may have a kernel of truth to it. A disaster movie always gives the characters and the audience a kernel of hope to which to cling.
Mathematicians use the word a bit differently and emphasize its ability to let a small part define a larger concept. Technically, the kernel of a function f is the set of values x[1], x[2], ..., x[n] such that f(x[i]) = 1, or whatever the identity element happens to be. The kernel does a good job of defining how the function behaves on all the other elements, and algebraists study it because it reveals the function's overall behavior.[^5]
[5]: The kernel of f(x)=x^2 (with identity element 1) is {-1, 1}, and it illustrates that the function maps two values onto each result.
The OS designers use the word in the same way. If they define the kernel correctly, then the behavior of the rest of the OS will follow. The small part of the code defines the behavior of the entire computer. If the kernel does one thing well, the entire computer will do it well. If it does one thing badly, then everything will suffer.
Many