Summary
When I first saw UNIX, it was V7 UNIX on a Perkin Elmer 3230 at Oklahoma State University. I had been reprimanded for using the class account for research, and was given an account on the department research machine so that I wouldn't cause the account to run dry again.
When you get in trouble at college, it makes it hard to stay on the good side of your instructors. And, when your classmates find out it was your fault that the account ran out of money on the night they were trying to get their work done, you don't win friends either.
Would You Like a Fork With That Dup
I invested time in learning to use that UNIX stuff, programming in the C language, and coming to appreciate the power of fork(2), execl(2) and dup(2) for creating multi-process applications. These three UNIX system calls provide the most extreme power for multi-tasking. I wrote a shell called vish(1). It presented a vi(1)-like editor screen that contained the history of the commands I had run. I could search, edit and resubmit my commands, and otherwise had a great way to remind myself of what I had been doing, or how to do something arcane that I had learned just yesterday and needed to keep working with today.
VMS is Not UNIX
I quickly became trained in the arts of the vi(1) editor. When Ned Freed left OSU to go back to Claremont College with PMDF and his grand ideas about email, I took up his position in the Math Department, managing the Math/Stat VAX-11/750 running VMS 4.x. I immediately had to deal with a number of issues. That little machine was being beaten to death by the grad students running simulations and extended calculations using MathLib. The CPU was pegged for days, and swapping seemed ready to take the poor disk drives apart.
I had at my disposal Fortran and VAX assembler. I now had to invest a lot more time to learn all about this VMS stuff, but the good thing was that I got paid to learn! I did my studying and wrote a long-term scheduler, called sched, that ran in the background at elevated priority, using about 3% of the CPU. It watched over processes and adjusted the priority of any long-running process, interactive or not. It made sure that when you sat down to have a nice interactive session with the computer, you got good response to your editing and simple commands. I learned a lot about scheduling of CPU resources, and about how badly typical priority-adjustment schemes really do at managing CPU use, with such short-sighted inputs (priority, quantum use and quantum-termination reasons).
This was in 1986. In this day and age, little has changed about how scheduling is done. The CPUs are much faster now, and thus the 50%, 33% or 25% of the CPU a user gets when there are one, two or three other 100% CPU-bound processes is still enough to make people think that things are going okay.
How About a Billion CPUs
Distributed CPU farms provide insight into how CPU utilization could be better managed, using cheaper systems to provide a very reasonable amount of CPU resource. As long as there are more CPUs than 100% CPU-bound processes, nobody feels any pain. Interactive processing goes to idle processors, and thus you get instant response. An OS with a longer-term view of process behavior would see that a process was no more likely to complete its execution in this quantum than it was in the last, and just adjust its priority down a notch. Then interactive users would get the full benefit of CPU time, because they could get multiple quanta of use without having to compete with the 100% CPU-bound processes.
Many different operating systems implement scheduling differently. Too bad we don't have 100% compatibility of programming languages across all operating systems, so that you could pick the scheduling you wanted!