Jul 30 2023
 

Ah yes! Well the first thing to answer is what a terminal is.

A terminal is a device for communicating with a computer using text (graphics was possible but relatively rare, especially in the early days) – you would type in a command in text and the computer would respond in text :-

» ls
1  2  bad-directory

Although the “terminal” is still available today in the form of a gooey program, the early terminals communicated with the computer with some form of serial port (usually RS232). The first terminals were modified teleprinters (often called “Teletypes” due to the domination of that company in the USA). These were large electromechanical devices where the display was paper – they were printing terminals.

The first terminals that displayed on a screen were very much like the printing terminals – they would “print” output from the computer on the last line of the screen and scroll for additional lines. Just like on a printing terminal except that once things scrolled off the top of the screen they were lost.

At this point in computing history, we’re just at the start of the microcomputer age; in fact one of the uses for which Intel’s second processor (the 8008) was developed was to operate as the heart of a computer terminal.

As the microprocessor-controlled terminal was essentially run by software, programmers started adding new features that would do things like clear the screen, or move the cursor around the screen so you could display text anywhere you wanted.
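These features were driven by escape sequences – special byte sequences sent from the computer to the terminal. Each manufacturer originally had its own codes; as a sketch, here are the ANSI/VT100-style sequences that were eventually standardised :-

```shell
# ANSI/VT100-style escape sequences (ESC is the byte 033 octal).
# Earlier terminals each used their own incompatible codes; these are
# the ones that were eventually standardised as ECMA-48/ANSI.

printf '\033[2J'      # clear the whole screen
printf '\033[1;1H'    # move the cursor to row 1, column 1 ("home")
printf '\033[10;20H'  # move the cursor to row 10, column 20
printf 'hello\n'      # text then appears at that position
```

Run on any modern terminal emulator, the "hello" appears at row 10, column 20 rather than on the next line – exactly the trick that full-screen editors were built on.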

At this point one definition of “dumb terminal” can be found – a terminal that just emulated a printing terminal was a dumb terminal; ones with additional features weren’t so dumb.

As the 1970s progressed, terminals gained more and more features, and eventually some became capable of downloading software from the computer they were connected to and running it locally – such as (optionally) the HP 2647, or the Bell Labs Blit terminal.

Such terminals could be termed “smart” and their predecessors “dumb”. And if you notice a similarity with the somewhat later “thin clients”, you wouldn’t be entirely wrong.

Alternatively, some terminals (such as the IBM “green screen” terminals) operated in block mode, where the terminal allowed a certain amount of editing locally and sent the result back to the computer a screen at a time. These necessarily had to have a certain amount of “smarts” built in, so they were smarter than character-at-a-time terminals (which were thus “dumb”).

"Dumb" Terminal
A “dumb” terminal

So to an extent there is no real agreement on what a “dumb terminal” really is. Pick one that you like!

Aug 14 2021
 

Today if you are a Linux user and fire up a terminal window to “do something” at the command-line, you are using a gooey program to emulate an old terminal which was separate from the computer.

Today you are almost always using a keyboard and screen connected directly to the computer you are using, and the gooey program you fire up as a terminal is properly called a terminal emulator. That is, it pretends to be a real terminal.

So what were these real terminals?

The earliest “terminals” were actually teletypes for communicating text messages over long distances (over wires!). Not only was there no digital computer involved, but they predate computers by quite a way – the earliest ones were used in the late 19th century. And of course printed the text onto paper directly. The earliest digital computers used these teletypes as input and output devices, so you could type in commands and see the result immediately (or as quickly as the result could be produced). These early days still leave some traces today :-

✓ mike@Lime» tty
/dev/pts/5

The “tty” command commemorates those old printing terminals – “tty” is short for “teletype”.
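Another trace survives in the way programs still ask whether their output is going to a terminal at all (the C library call is isatty()). In shell scripts the equivalent is the [ -t 1 ] test, which checks whether file descriptor 1 (standard output) is attached to a terminal – a sketch :-

```shell
# Behave differently depending on whether stdout is a terminal
# or has been redirected to a pipe or a file.
if [ -t 1 ]
then
    echo "stdout is a terminal: $(tty)"
else
    echo "stdout is redirected (a pipe or a file)"
fi
```

Tools such as ls use exactly this test to decide whether to colourise their output.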

The speed and wasted paper of those printing terminals was a bit tedious, so the 1970s saw them gradually replaced with glass teletypes – which were basically keyboards and CRT screens built into an enclosure that would attach to a central computer over a serial line.

An ADM 3A terminal

These terminals (and showing an ADM 3A here is a little unfair as it wasn’t quite this simple) were really simple – they had exactly the same capabilities as the printing terminals. No cursor control (meaning no full screen editing), plain text, no italics or bold, etc.

Over time, more and more features were added to the terminal allowing more usable software (in particular the learning curve was not quite as steep). These features grew to accommodate colour, graphics, the ability to load and save data locally, and even the ability to function as a microcomputer (the HP pictured below could run CP/M in certain configurations).
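This proliferation of features (and of incompatible escape sequences between manufacturers) is why Unix grew the termcap and later terminfo databases, recording each terminal model’s capabilities. The tput command looks up the right sequence for whatever terminal $TERM names, so software did not have to hard-code any one model’s codes – a sketch, assuming the usual ncurses tput :-

```shell
# Query the terminfo database for the current terminal's capabilities
echo "Terminal type: $TERM"
echo "Columns: $(tput cols)"
echo "Lines:   $(tput lines)"

# Emit this terminal's own sequences rather than hard-coded VT100 codes
tput clear       # clear the screen
tput cup 9 19    # move the cursor to row 10, column 20 (arguments are 0-based)
printf 'hello\n'
tput sgr0        # reset any text attributes (bold, colour, etc.)
```

On a VT100-compatible terminal “tput cup 9 19” emits the same bytes as the hard-coded escape sequence would, but on some other terminal type it emits that terminal’s own codes instead.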

But where did they go?

The heyday of the terminal was in the 1980s when many office-based companies were busy trying to put something like a computer on every desk, and a terminal connected to a central computer was one way of doing that. But they compared rather poorly with microcomputers – typically very slow in comparison, less likely to offer any kind of graphics (graphics was an option but typically cost as much as a microcomputer), and they just weren’t very “cool”.

Despite several attempts at resurrecting them (they were popular amongst those who had to centrally support them), they never really returned.

But they do survive inside modern operating systems in the form of a terminal emulator (as mentioned previously) to access the operating system command line – all three main operating systems (Windows, macOS, and Linux) have a terminal emulator of sorts. And Microsoft is actually investing in re-engineering their terminal emulator.

Jan 25 2019
 

If you are using the right kind of terminal that supports graphics inline (such as kitty), then you can write simple (or complex) tools that insert images into the terminal.

Being able to display the flag of a country (if you know its two-letter ISO code) is kind of trivial but useful if you need it.

And a shell function to do that is remarkably simple :-

function flag {
    # Fetch the flag image for a two-letter ISO country code
    if wget -q -O /var/tmp/flag.$$ http://flagpedia.net/data/flags/normal/${1}.png
    then
        # Display it inline, then clean up the temporary file
        kitty +kitten icat /var/tmp/flag.$$ && rm /var/tmp/flag.$$
    else
        echo Not found
    fi
}

(that’s a Zsh function which may require adaptation for Bash).

Jan 07 2007
 

I recently replaced an elderly SGI Octane2 workstation which had 2 CPUs (400MHz MIPS-based), 1.5Gbytes of memory, and 3 elderly SCSI disks with a nice new Sun Ultra40 … 2 AMD Opteron 248s, 2Gbytes memory, and 2 mirrored SATA drives. It is interesting to compare the difference between an old-fashioned workstation originally designed in the middle to late 1990s with a 21st century PC. Not that I’m going to produce hard numbers from useful benchmarks … that is just too much work, and in some ways it is the feel of the differences that are important.

Of course this is not really a fair comparison. Whilst the SGI Octane is now very elderly and due to SGI managerial incompetence has not kept pace with PC performance as it should have done, it is after all a machine that originally cost 10-20 times the cost of the PC I am comparing it to. In car terms, I’m comparing a 20-year old Mercedes with a new and cheap Ford. I should point out that much of the software I am using is very much the same on both machines … the Enlightenment window manager, Sylpheed Claws as the mail client, Firefox as the browser, LyX as the word processor, and a text terminal for much of the remainder.

The PC is considerably quicker than the SGI of course. The graphical user interface is a good deal snappier, and most of the applications offer very welcome improvements in performance. With the exception of GIMP, however, none of this performance increase is really essential; my old SGI ran pretty much everything my PC does, fast enough to get the job done. GIMP performance is the reason I upgraded, and here the difference is quite dramatic – filters that previously required patience now run almost instantly; when you are repeatedly trying things out in GIMP on quite large images, this performance increase makes some things feasible that simply were not before.

There is one area where the SGI does offer some advantage over the PC; something I was expecting. The PC’s disks are overall somewhat faster than the disks in the SGI (and of course I don’t have to pay to mirror my disks!), but the SGI tends to work more smoothly under high load. I’ve noticed before with ‘low end’ disks in PCs that if you start to drive them very hard, the computer will sometimes stutter. Essentially the SGI was slower, but smoother under high disk load than the PC.

If it were not for the need to run GIMP extensively (and the appeal of more standard add-on hardware like USB hard disks), there is no reason why I could not have continued with the SGI. The tendency we have in the computing arena of replacing computers every few years is not a healthy one.